Feb 16 12:52:51 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 12:52:51 crc restorecon[4693]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 12:52:51 crc restorecon[4693]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc 
restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc 
restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:52:52 crc restorecon[4693]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.042399 4740 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046198 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046222 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046232 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046240 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046247 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046253 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046259 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046268 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046282 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046289 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046295 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046301 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046307 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046313 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046318 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046323 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046329 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046334 4740 feature_gate.go:330] 
unrecognized feature gate: InsightsConfigAPI Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046341 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046347 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046354 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046361 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046368 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046375 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046382 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046388 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046395 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046402 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046408 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046415 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046424 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046434 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046443 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046450 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046460 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046468 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046475 4740 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046485 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046494 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046502 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046510 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046516 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046521 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046527 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046533 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046538 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046544 4740 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046549 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046554 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046559 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046565 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046570 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046575 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046580 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048355 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048380 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048387 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048393 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048399 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048405 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048410 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048416 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048423 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048429 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048435 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048442 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048447 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048455 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048461 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048466 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048472 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049326 4740 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049354 4740 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049365 4740 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049374 4740 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049383 4740 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049390 4740 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049399 4740 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049407 4740 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049415 4740 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049423 4740 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049437 4740 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049444 4740 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049450 4740 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049457 4740 flags.go:64] FLAG: --cgroup-root=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049463 4740 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049471 4740 flags.go:64] FLAG: --client-ca-file=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049478 4740 flags.go:64] FLAG: --cloud-config=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049513 4740 flags.go:64] FLAG: --cloud-provider=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049521 4740 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049531 4740 flags.go:64] FLAG: --cluster-domain=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049537 4740 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049543 4740 flags.go:64] FLAG: --config-dir=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049550 4740 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049558 4740 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049581 4740 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049595 4740 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049604 4740 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049805 4740 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049845 4740 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049854 4740 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049862 4740 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049871 4740 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049878 4740 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049886 4740 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049893 4740 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049900 4740 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049906 4740 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049912 4740 flags.go:64] FLAG: --enable-server="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049919 4740 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049927 4740 flags.go:64] FLAG: --event-burst="100"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049934 4740 flags.go:64] FLAG: --event-qps="50"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049941 4740 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049947 4740 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049953 4740 flags.go:64] FLAG: --eviction-hard=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049985 4740 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049996 4740 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050004 4740 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050011 4740 flags.go:64] FLAG: --eviction-soft=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050017 4740 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050023 4740 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050030 4740 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050036 4740 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050042 4740 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050048 4740 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050054 4740 flags.go:64] FLAG: --feature-gates=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050061 4740 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050069 4740 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050075 4740 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050081 4740 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050088 4740 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050095 4740 flags.go:64] FLAG: --help="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050101 4740 flags.go:64] FLAG: --hostname-override=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050108 4740 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050115 4740 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050121 4740 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050128 4740 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050134 4740 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050140 4740 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050146 4740 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050152 4740 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050159 4740 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050165 4740 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050172 4740 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050178 4740 flags.go:64] FLAG: --kube-reserved=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050184 4740 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050190 4740 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050197 4740 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050204 4740 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050211 4740 flags.go:64] FLAG: --lock-file=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050219 4740 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050226 4740 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050236 4740 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050248 4740 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050254 4740 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050260 4740 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050267 4740 flags.go:64] FLAG: --logging-format="text"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050273 4740 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050280 4740 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050285 4740 flags.go:64] FLAG: --manifest-url=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050292 4740 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050300 4740 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050306 4740 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050314 4740 flags.go:64] FLAG: --max-pods="110"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050320 4740 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050326 4740 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050332 4740 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050338 4740 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050344 4740 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050350 4740 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050356 4740 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050370 4740 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050376 4740 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050383 4740 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050390 4740 flags.go:64] FLAG: --pod-cidr=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050395 4740 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050404 4740 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050411 4740 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050417 4740 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050422 4740 flags.go:64] FLAG: --port="10250"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050429 4740 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050435 4740 flags.go:64] FLAG: --provider-id=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050440 4740 flags.go:64] FLAG: --qos-reserved=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050447 4740 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050453 4740 flags.go:64] FLAG: --register-node="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050460 4740 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050466 4740 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050476 4740 flags.go:64] FLAG: --registry-burst="10"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050483 4740 flags.go:64] FLAG: --registry-qps="5"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050489 4740 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050495 4740 flags.go:64] FLAG: --reserved-memory=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050502 4740 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050508 4740 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050515 4740 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050521 4740 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050527 4740 flags.go:64] FLAG: --runonce="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050533 4740 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050539 4740 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050545 4740 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050551 4740 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050557 4740 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050563 4740 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050570 4740 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050576 4740 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050582 4740 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050588 4740 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050594 4740 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050600 4740 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050606 4740 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050613 4740 flags.go:64] FLAG: --system-cgroups=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050618 4740 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050627 4740 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050633 4740 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050639 4740 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050647 4740 flags.go:64] FLAG: --tls-min-version=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050653 4740 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050659 4740 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050665 4740 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050671 4740 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050677 4740 flags.go:64] FLAG: --v="2"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050685 4740 flags.go:64] FLAG: --version="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050692 4740 flags.go:64] FLAG: --vmodule=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050699 4740 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050706 4740 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050876 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050884 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050890 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050896 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050902 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050907 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050915 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050921 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050927 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050933 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050939 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050944 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050952 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050958 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050964 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050969 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050974 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050979 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050985 4740 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050990 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050995 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051001 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051006 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051012 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051017 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051022 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051029 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051036 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051042 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051048 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051054 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051059 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051066 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051072 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051077 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051083 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051089 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051094 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051104 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051109 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051115 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051120 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051125 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051130 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051135 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051141 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051146 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051151 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051156 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051161 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051166 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051172 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051177 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051182 4740 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051187 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051193 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051199 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051205 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051212 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051219 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051225 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051232 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051238 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051244 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051249 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051254 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051259 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051264 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051270 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051275 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051283 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.052129 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.064855 4740 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.065276 4740 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065389 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065404 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065447 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065454 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065460 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065467 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065472 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065478 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065483 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065489 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065494 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065499 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065504 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065510 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065515 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216
12:52:53.065521 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065527 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065534 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065541 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065547 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065555 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065561 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065566 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065572 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065579 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065587 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065593 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065599 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065605 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065613 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065619 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065625 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065631 4740 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065637 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065644 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065649 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065654 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065660 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065666 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065671 4740 feature_gate.go:330] 
unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065677 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065682 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065687 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065692 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065699 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065706 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065713 4740 feature_gate.go:330] unrecognized feature gate: Example Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065722 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065730 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065736 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065742 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065748 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065755 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065761 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065767 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065772 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065777 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065783 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065790 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065796 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065803 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065831 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065838 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065846 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065854 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065861 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065868 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065873 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065880 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065887 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065894 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.065906 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066108 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066122 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066129 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066137 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066144 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066152 4740 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066160 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066167 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066174 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066180 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066185 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066191 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066196 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066202 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066208 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066213 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066220 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066227 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066233 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066240 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066245 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066251 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066257 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066262 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066268 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066274 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066279 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066284 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066290 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066297 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066305 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066311 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066318 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066324 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066330 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066336 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066342 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066349 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066354 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066361 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066368 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066374 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066380 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066386 4740 
feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066391 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066397 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066402 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066408 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066413 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066419 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066426 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066431 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066437 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066443 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066449 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066455 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066460 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066465 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 12:52:53 
crc kubenswrapper[4740]: W0216 12:52:53.066471 4740 feature_gate.go:330] unrecognized feature gate: Example Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066476 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066481 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066486 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066492 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066498 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066505 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066510 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066516 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066521 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066527 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066532 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066537 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.066546 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.066802 4740 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.071983 4740 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.072093 4740 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.074971 4740 server.go:997] "Starting client certificate rotation" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.075003 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.075272 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-02 01:10:35.956269496 +0000 UTC Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.075432 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.101669 4740 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.105333 4740 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.105719 4740 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.128375 4740 log.go:25] "Validated CRI v1 runtime API" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.161858 4740 log.go:25] "Validated CRI v1 image API" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.165242 4740 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.171962 4740 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-12-48-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.172022 4740 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:44 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.194216 4740 manager.go:217] Machine: {Timestamp:2026-02-16 12:52:53.190132822 +0000 UTC m=+0.566481583 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 
AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7ed304a0-359f-427d-948c-1ad2fcad2d68 BootID:16811f3b-c2df-4c7d-9862-6b10264a49b2 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:44 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e6:34:a4 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e6:34:a4 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:53:e3:42 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d4:45:01 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ee:d7:5f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:74:77:94 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:12:c1:bf:e6:94:28 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6e:6f:e4:a3:af:f8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data 
Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.194519 4740 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.194798 4740 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.195348 4740 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.195564 4740 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.195599 4740 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.195860 4740 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.195872 4740 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.196655 4740 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.196710 4740 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.197744 4740 state_mem.go:36] "Initialized new in-memory state store" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.198030 4740 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.202532 4740 kubelet.go:418] "Attempting to sync node with API server" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.202572 4740 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.202606 4740 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.202626 4740 kubelet.go:324] "Adding apiserver pod source" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.202645 4740 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.207267 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.207345 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.207474 4740 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.207515 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.208891 4740 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.210684 4740 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.213201 4740 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.214888 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.214936 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.214952 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.214966 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.214986 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215000 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215013 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215034 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215051 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215065 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215084 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215098 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.216130 4740 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.216945 4740 server.go:1280] "Started kubelet" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.217788 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218478 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218543 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:24:57.649725814 +0000 UTC Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218687 4740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218704 4740 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218759 4740 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218776 4740 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 12:52:53 crc systemd[1]: Started Kubernetes Kubelet. 
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218722 4740 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.219403 4740 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.220240 4740 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.220293 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.220477 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.220570 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.221880 4740 factory.go:55] Registering systemd factory Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.221914 4740 factory.go:221] Registration of the systemd container factory successfully Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.222231 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Feb 16 12:52:53 crc 
kubenswrapper[4740]: I0216 12:52:53.222397 4740 factory.go:153] Registering CRI-O factory Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.222426 4740 factory.go:221] Registration of the crio container factory successfully Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.222547 4740 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.222583 4740 factory.go:103] Registering Raw factory Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.222616 4740 manager.go:1196] Started watching for new ooms in manager Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.232708 4740 manager.go:319] Starting recovery of all containers Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.233856 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894bb31254a9cd5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:52:53.216885973 +0000 UTC m=+0.593234774,LastTimestamp:2026-02-16 12:52:53.216885973 +0000 UTC m=+0.593234774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.237247 4740 server.go:460] "Adding debug handlers to kubelet server" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243146 4740 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243203 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243220 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243234 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243247 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243261 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243273 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243285 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243322 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243334 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243344 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243356 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243369 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243384 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243396 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243407 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243429 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243441 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243452 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243463 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243473 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243509 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243523 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243535 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243572 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" 
seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243589 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243606 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243618 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243629 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243641 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243651 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243661 4740 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243674 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243686 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243703 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244921 4740 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244949 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 
12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244961 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244975 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244984 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244993 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245002 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245012 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245021 4740 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245030 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245040 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245049 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245059 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245069 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245079 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245089 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245098 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245108 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245120 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245131 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245141 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245150 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245171 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245180 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245189 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245199 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245208 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245217 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245226 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245236 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245250 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245261 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245272 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245285 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245297 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245310 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245321 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245332 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245342 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" 
seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245389 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245406 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245420 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245432 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245446 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245460 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: 
I0216 12:52:53.245474 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245485 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245499 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245510 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245522 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245535 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245546 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245558 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245567 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245580 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245592 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245604 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245615 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245626 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245640 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245652 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245666 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245678 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245690 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245701 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245713 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245726 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245740 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245752 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245766 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245784 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245797 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245826 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245839 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245851 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245865 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245878 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245892 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245905 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245917 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245929 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245940 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245953 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245963 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245975 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245989 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246000 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246012 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" 
seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246024 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246035 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246046 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246060 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246071 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246085 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246096 4740 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246109 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246121 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246133 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246145 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246157 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246174 4740 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246188 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246199 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246212 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246224 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246236 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246249 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246260 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246272 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246287 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246299 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246310 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246357 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246384 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246396 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246408 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246428 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246442 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246453 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" 
seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246464 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246475 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246486 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246503 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246516 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246527 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 12:52:53 crc 
kubenswrapper[4740]: I0216 12:52:53.246540 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246553 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246565 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246575 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246586 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246599 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246611 4740 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246624 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246638 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246649 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246661 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246672 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246684 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246696 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246708 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246720 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246731 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246741 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246754 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246766 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246777 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246788 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246799 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249882 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249922 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" 
seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249940 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249955 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249970 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249983 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249999 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250015 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 
12:52:53.250028 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250042 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250054 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250069 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250082 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250095 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250109 4740 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250121 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250133 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250146 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250160 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250174 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250186 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250197 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250211 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250225 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250238 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250251 4740 reconstruct.go:97] "Volume reconstruction finished" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250261 4740 reconciler.go:26] "Reconciler: start to sync state" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250782 4740 manager.go:324] Recovery completed Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.259523 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 
12:52:53.261453 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.261511 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.261524 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.262757 4740 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.262770 4740 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.262794 4740 state_mem.go:36] "Initialized new in-memory state store" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.278097 4740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.278998 4740 policy_none.go:49] "None policy: Start" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.279840 4740 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.279895 4740 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.279926 4740 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.279983 4740 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.280302 4740 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.280331 4740 state_mem.go:35] "Initializing new in-memory state store" Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.283090 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.283261 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.320494 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.331905 4740 manager.go:334] "Starting Device Plugin manager" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.331978 4740 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 12:52:53 crc 
kubenswrapper[4740]: I0216 12:52:53.331994 4740 server.go:79] "Starting device plugin registration server" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.332542 4740 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.332565 4740 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.332783 4740 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.332887 4740 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.332897 4740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.345837 4740 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.380536 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.380746 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382031 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382098 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382317 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382524 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382560 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.383605 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.383649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.383666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384166 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384191 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384310 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384422 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384460 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.385346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.385393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.385418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386099 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386336 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386500 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386555 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387501 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387616 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387658 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388238 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388255 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388491 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388533 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388548 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388563 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.389429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.389476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.389488 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.423119 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.433208 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.434594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: 
I0216 12:52:53.434637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.434648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.434681 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.435344 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452632 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452668 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452714 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452739 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452805 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452881 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452916 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452947 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453010 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453039 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453068 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453124 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453155 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555296 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555378 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555448 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555510 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555540 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555599 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555609 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555692 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555717 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555846 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555861 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555582 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555893 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555793 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555655 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555987 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556018 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556048 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556045 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556087 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556148 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556102 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556159 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556172 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556083 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.636069 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.637602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.637649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.637660 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.637690 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.638178 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.705992 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.719571 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.735963 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.751468 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.756982 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.820290 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1f86a1464d7330a37b16fe71f14f44e4fbabcabf113ee6c93d11926885851d84 WatchSource:0}: Error finding container 1f86a1464d7330a37b16fe71f14f44e4fbabcabf113ee6c93d11926885851d84: Status 404 returned error can't find the container with id 1f86a1464d7330a37b16fe71f14f44e4fbabcabf113ee6c93d11926885851d84
Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.824538 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms"
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.825070 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e81511cf6e188314d2a11b7afacc2cd18f98ada7b3270e5b417b9fa18645f95e WatchSource:0}: Error finding container e81511cf6e188314d2a11b7afacc2cd18f98ada7b3270e5b417b9fa18645f95e: Status 404 returned error can't find the container with id e81511cf6e188314d2a11b7afacc2cd18f98ada7b3270e5b417b9fa18645f95e
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.830448 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9f7985b739c49d420b021824a704b43375293200b8bd8eb6543db15bdc3e5caa WatchSource:0}: Error finding container 9f7985b739c49d420b021824a704b43375293200b8bd8eb6543db15bdc3e5caa: Status 404 returned error can't find the container with id 9f7985b739c49d420b021824a704b43375293200b8bd8eb6543db15bdc3e5caa
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.833001 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5023a39607ec953c0fc9398294d01e07306983a122d88a6d2dd40d1d7ba4b79c WatchSource:0}: Error finding container 5023a39607ec953c0fc9398294d01e07306983a122d88a6d2dd40d1d7ba4b79c: Status 404 returned error can't find the container with id 5023a39607ec953c0fc9398294d01e07306983a122d88a6d2dd40d1d7ba4b79c
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.837971 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f8209d6b12ca32ff2f68734d92946675fe376c04fef6b442553ae4f13cb2867a WatchSource:0}: Error finding container f8209d6b12ca32ff2f68734d92946675fe376c04fef6b442553ae4f13cb2867a: Status 404 returned error can't find the container with id f8209d6b12ca32ff2f68734d92946675fe376c04fef6b442553ae4f13cb2867a
Feb 16 12:52:54 crc kubenswrapper[4740]: W0216 12:52:54.034669 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.034785 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.038386 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.040330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.040392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.040410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.040447 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.040792 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc"
Feb 16 12:52:54 crc kubenswrapper[4740]: W0216 12:52:54.127993 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.128123 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.218997 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:58:18.48138327 +0000 UTC
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.219577 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.285379 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9f7985b739c49d420b021824a704b43375293200b8bd8eb6543db15bdc3e5caa"}
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.286897 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e81511cf6e188314d2a11b7afacc2cd18f98ada7b3270e5b417b9fa18645f95e"}
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.289464 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f86a1464d7330a37b16fe71f14f44e4fbabcabf113ee6c93d11926885851d84"}
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.291090 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f8209d6b12ca32ff2f68734d92946675fe376c04fef6b442553ae4f13cb2867a"}
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.293566 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5023a39607ec953c0fc9398294d01e07306983a122d88a6d2dd40d1d7ba4b79c"}
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.625720 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s"
Feb 16 12:52:54 crc kubenswrapper[4740]: W0216 12:52:54.660350 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.660441 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:54 crc kubenswrapper[4740]: W0216 12:52:54.702528 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.702594 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.841053 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.842666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.842952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.842961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.842982 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.843354 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.159226 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 12:52:55 crc kubenswrapper[4740]: E0216 12:52:55.161959 4740 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.219300 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.219377 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:58:29.436750885 +0000 UTC
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.299004 4740 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3" exitCode=0
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.299111 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.299103 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3"}
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.300263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.300296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.300307 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.302415 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91"}
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.302452 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc"}
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.302483 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6"}
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.303999 4740 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="22d375bd81abcdcb3e98158153980ce48ba7e8af92764222e3b5effef93ae716" exitCode=0
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.304070 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"22d375bd81abcdcb3e98158153980ce48ba7e8af92764222e3b5effef93ae716"}
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.304187 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.305317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.305350 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.305361 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.306629 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5" exitCode=0
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.306671 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5"}
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.306707 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.308281 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.308321 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.308337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.310106 4740 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9" exitCode=0
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.310154 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9"}
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.310230 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.311071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.311112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.311123 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.316214 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.318659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.318692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.318703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:55 crc kubenswrapper[4740]: E0216 12:52:55.607138 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894bb31254a9cd5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:52:53.216885973 +0000 UTC m=+0.593234774,LastTimestamp:2026-02-16 12:52:53.216885973 +0000 UTC m=+0.593234774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 12:52:55 crc kubenswrapper[4740]: W0216 12:52:55.798453 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:55 crc kubenswrapper[4740]: E0216 12:52:55.798557 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:56 crc kubenswrapper[4740]: W0216 12:52:56.106234 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:56 crc kubenswrapper[4740]: E0216 12:52:56.106365 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.219530 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:58:01.351815677 +0000 UTC
Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.219597 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:56 crc kubenswrapper[4740]: E0216 12:52:56.227240 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s"
Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.315714 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0"}
Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.315763 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45"}
Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.315775 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934"}
Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.315914 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.317311 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.317345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.317359 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.320487 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd"}
Feb 16
12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.320597 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.321603 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.321625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.321633 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.324096 4740 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ed8cdbaffa6d92f7bb70361af17a1a1ac36209166e6da8115c0ed1a5261f3712" exitCode=0 Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.324314 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.324310 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ed8cdbaffa6d92f7bb70361af17a1a1ac36209166e6da8115c0ed1a5261f3712"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.325833 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.325855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.325866 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.330727 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.330780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.330825 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.330843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.333036 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c3c4fde60d7024db19bf7463e891e23ab4ad03222025aa3c38d27649128c421e"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.333137 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.334056 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.334122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.334135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:56 crc kubenswrapper[4740]: W0216 12:52:56.363418 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:56 crc kubenswrapper[4740]: E0216 12:52:56.363521 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.444127 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.445679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.445717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.445727 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.445752 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:52:56 crc kubenswrapper[4740]: E0216 12:52:56.446305 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: 
connect: connection refused" node="crc" Feb 16 12:52:56 crc kubenswrapper[4740]: W0216 12:52:56.731807 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:56 crc kubenswrapper[4740]: E0216 12:52:56.731945 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.219982 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:18:14.431585163 +0000 UTC Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.220728 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.339020 4740 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e6ca6d609b2a2ecae1283536c5a981a4817d9e704b4e9629190065843360a55f" exitCode=0 Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.339113 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e6ca6d609b2a2ecae1283536c5a981a4817d9e704b4e9629190065843360a55f"} Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.339137 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.340167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.340196 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.340209 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.344967 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd"} Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.345073 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.345132 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.345202 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.345510 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.345135 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.346401 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.346452 4740 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.346472 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.347275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.347314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.347284 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.347990 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.348085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.348185 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.348280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.348200 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.348295 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.221023 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 
03:43:08.741594689 +0000 UTC Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.352200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b9c0eeeb27377d61443f7754bfac1381f13b4f3a82ba264f61d1f9e1f226ec6d"} Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.352263 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b9c8099bb5eba996bc3d8d2e863bd70633bd9b0254c3fe5821fc4793cf046d45"} Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.352282 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fad9616d37e41997011d3984ba488307ed05ea1256b99562f50f2536d76cec56"} Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.355348 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.359960 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd" exitCode=255 Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.360089 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd"} Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.360160 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.360318 4740 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361273 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.362181 4740 scope.go:117] "RemoveContainer" containerID="d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.691593 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.691944 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.693279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.693313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:58 
crc kubenswrapper[4740]: I0216 12:52:58.693326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.221553 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:44:42.06735439 +0000 UTC Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.329123 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.371766 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.372613 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"84842791f89c497895c2a953a0e71d29b46aa338838efc39995fc2b0ab32ca89"} Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.372730 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5ec7459bbcca61588e290cb35a3f34e0554be0a8ecdb013266b263c0c23ec9cf"} Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.373434 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.373488 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.373508 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.377097 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.380871 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878"} Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.381054 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.381296 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.382659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.382738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.382762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.647203 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.649569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.649648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.649675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.649725 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.222405 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:58:26.892153149 +0000 UTC Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.305094 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.305440 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.307088 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.307144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.307155 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.383387 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.383459 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.383516 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.384754 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 
12:53:00.384836 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.384852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.384891 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.384938 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.384951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.036243 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.223296 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:44:00.181084665 +0000 UTC Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.269609 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.386308 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.387788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.387905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 
12:53:01.387925 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.692435 4740 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.692529 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 12:53:02 crc kubenswrapper[4740]: I0216 12:53:02.224384 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:38:14.571024021 +0000 UTC Feb 16 12:53:02 crc kubenswrapper[4740]: I0216 12:53:02.389223 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:02 crc kubenswrapper[4740]: I0216 12:53:02.390709 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:02 crc kubenswrapper[4740]: I0216 12:53:02.390771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:02 crc kubenswrapper[4740]: I0216 12:53:02.390788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.020232 4740 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.020513 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.022266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.022302 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.022315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.224482 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:55:00.665471754 +0000 UTC
Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.258775 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.259024 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.260201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.260272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.260286 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:03 crc kubenswrapper[4740]: E0216 12:53:03.346231 4740 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.225258 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:40:39.501111705 +0000 UTC
Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.931655 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.931972 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.934000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.934058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.934075 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.938004 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.225698 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:13:42.063688877 +0000 UTC
Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.398059 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.403511 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.403582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.403600 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.405048 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:53:06 crc kubenswrapper[4740]: I0216 12:53:06.226318 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:19:31.642197315 +0000 UTC
Feb 16 12:53:06 crc kubenswrapper[4740]: I0216 12:53:06.400597 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:53:06 crc kubenswrapper[4740]: I0216 12:53:06.401976 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:06 crc kubenswrapper[4740]: I0216 12:53:06.402015 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:06 crc kubenswrapper[4740]: I0216 12:53:06.402027 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:07 crc kubenswrapper[4740]: I0216 12:53:07.226888 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:58:45.150159586 +0000 UTC
Feb 16 12:53:08 crc kubenswrapper[4740]: I0216 12:53:08.115587 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 16 12:53:08 crc kubenswrapper[4740]: I0216 12:53:08.115969 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 16 12:53:08 crc kubenswrapper[4740]: I0216 12:53:08.123193 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 16 12:53:08 crc kubenswrapper[4740]: I0216 12:53:08.123265 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 16 12:53:08 crc kubenswrapper[4740]: I0216 12:53:08.227183 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:08:54.702629141 +0000 UTC
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.227280 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:12:32.739362457 +0000 UTC
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.227393 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.227646 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.228966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.229024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.229039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.252117 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.410040 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.411513 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.411769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.412024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.431296 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 16 12:53:10 crc kubenswrapper[4740]: I0216 12:53:10.227539 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:16:13.757906387 +0000 UTC
Feb 16 12:53:10 crc kubenswrapper[4740]: I0216 12:53:10.412974 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:53:10 crc kubenswrapper[4740]: I0216 12:53:10.414422 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:10 crc kubenswrapper[4740]: I0216 12:53:10.414483 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:10 crc kubenswrapper[4740]: I0216 12:53:10.414498 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.042326 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.042500 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.042944 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.042996 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.043724 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.043786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.043804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.048027 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.228300 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:42:25.795001859 +0000 UTC
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.415404 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.416392 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.416479 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.417081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.417340 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.417499 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.692775 4740 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.692884 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 12:53:12 crc kubenswrapper[4740]: I0216 12:53:12.228796 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:45:34.910159235 +0000 UTC
Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.110168 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.110365 4740 trace.go:236] Trace[1864731534]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:53:00.072) (total time: 13038ms):
Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1864731534]: ---"Objects listed" error: 13038ms (12:53:13.110)
Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1864731534]: [13.038249127s] [13.038249127s] END
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.110760 4740 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.112080 4740 trace.go:236] Trace[1901863196]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:53:01.577) (total time: 11534ms):
Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1901863196]: ---"Objects listed" error: 11534ms (12:53:13.111)
Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1901863196]: [11.534229283s] [11.534229283s] END
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.115162 4740 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.115796 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.119355 4740 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.119511 4740 trace.go:236] Trace[157355179]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:53:01.565) (total time: 11553ms):
Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[157355179]: ---"Objects listed" error: 11553ms (12:53:13.119)
Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[157355179]: [11.553705717s] [11.553705717s] END
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.119546 4740 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.120524 4740 trace.go:236] Trace[1549146929]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:53:01.864) (total time: 11256ms):
Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1549146929]: ---"Objects listed" error: 11256ms (12:53:13.120)
Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1549146929]: [11.256262812s] [11.256262812s] END
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.120843 4740 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.125618 4740 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.146749 4740 csr.go:261] certificate signing request csr-85kdb is approved, waiting to be issued
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.155647 4740 csr.go:257] certificate signing request csr-85kdb is issued
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.215615 4740 apiserver.go:52] "Watching apiserver"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.219173 4740 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.219570 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.220126 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.220568 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.220737 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.220757 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.220806 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.221245 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.221589 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.221597 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.221645 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227113 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227188 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227755 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227861 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227878 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227385 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.228003 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227461 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227517 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.229200 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:54:21.547026452 +0000 UTC
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.255783 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.271106 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.286876 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.290914 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-ttqrb"]
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.292157 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ttqrb"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.295584 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.295777 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.297949 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.305047 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.314730 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320404 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320426 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320483 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320525 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320564 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320585 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320608 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320642 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320665 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.321837 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.321883 4740 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.322067 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.322430 4740 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.322567 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.326135 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.328538 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.331629 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.339301 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: 
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.347216 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.347253 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.347272 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.347368 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:13.84733182 +0000 UTC m=+21.223680541 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.350422 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.355785 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.355844 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.355867 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.355949 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-16 12:53:13.855924487 +0000 UTC m=+21.232273218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.360093 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.364464 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.385491 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.416710 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421136 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421194 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421220 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421294 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421321 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421349 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421379 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421402 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421421 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421440 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421461 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421493 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421566 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421588 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421610 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421614 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421648 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421637 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421710 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421735 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421754 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421764 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421794 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421826 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421788 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421851 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421930 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421940 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422015 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422038 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422040 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422064 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422088 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422120 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422148 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422166 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422185 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422206 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422222 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422245 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422275 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422297 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 
12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422548 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422608 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422642 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422677 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422702 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422735 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422756 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422778 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422828 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422860 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422887 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 
12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422904 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422925 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422942 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422966 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423516 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423540 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423562 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423578 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423596 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423612 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423629 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423648 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423676 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423709 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423732 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423750 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423766 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423786 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423822 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423842 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423859 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423875 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423962 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423981 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423997 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424042 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424060 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424079 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424099 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424118 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424135 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424151 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424168 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424187 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424205 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424224 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424240 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424257 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424277 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424294 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424310 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424332 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424350 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424366 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 12:53:13 crc 
kubenswrapper[4740]: I0216 12:53:13.424383 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424399 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424415 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424440 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424456 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424478 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424495 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424518 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424538 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424556 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424576 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: 
I0216 12:53:13.424593 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424611 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424627 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424644 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424664 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424689 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424705 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424722 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424751 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424769 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424788 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424805 
4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424854 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424872 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424890 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424912 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424931 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424946 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424964 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425011 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425029 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425047 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425065 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425131 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425163 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425191 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425208 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425224 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425240 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425256 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425369 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425389 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425429 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425446 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425463 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425479 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425526 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425565 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425680 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 12:53:13 crc kubenswrapper[4740]: 
I0216 12:53:13.425719 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425758 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425783 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425804 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425840 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425871 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425888 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426028 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426059 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426078 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426095 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426118 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426164 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426189 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426209 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426231 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426247 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426269 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426284 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426319 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426352 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426384 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426434 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426461 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426486 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426505 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426521 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426541 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426558 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426574 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426590 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426624 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 
12:53:13.426663 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426698 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426729 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426752 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426768 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426784 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426869 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426911 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426945 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427000 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427063 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427089 
4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427106 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427123 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427140 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427166 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427184 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" 
(UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427212 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427239 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427284 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427328 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427405 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427508 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427552 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnxf\" (UniqueName: \"kubernetes.io/projected/42324c80-0f4d-4a2b-8374-fa2358bc8217-kube-api-access-mxnxf\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427570 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42324c80-0f4d-4a2b-8374-fa2358bc8217-hosts-file\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427593 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427643 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427910 4740 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427942 4740 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427959 4740 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427972 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427987 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427999 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422087 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" 
(OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422194 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422358 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422408 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422431 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423357 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423405 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423425 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423585 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423935 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424003 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429113 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424656 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425306 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425729 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427713 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427933 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427978 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.428118 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.428248 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.428498 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.428684 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.428874 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.428903 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429135 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429399 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429590 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429615 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429757 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430086 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430206 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430282 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430350 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430528 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430603 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430603 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.431216 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.434576 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.434596 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.435023 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.435098 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.435168 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.435346 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.435611 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.436094 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.436465 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.436720 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.437021 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.437520 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.437672 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.437885 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.438084 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.438139 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.439799 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.439928 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.440626 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.440665 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.440846 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.441546 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.441556 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.442139 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.442243 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.448349 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.448549 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.448918 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.449628 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.450149 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.450489 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.450526 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.450640 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.450867 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.452212 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.452526 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.452544 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.452730 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.452798 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.453441 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.453695 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.453909 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.454119 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.454540 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.454757 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.454789 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.454929 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455126 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455212 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455321 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455453 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455714 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.458652 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455788 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455827 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.456526 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.456897 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.457210 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.457640 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.457964 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.458174 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.458543 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.458802 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.458870 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459064 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459040 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459333 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459339 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459389 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.459475 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459493 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459559 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.459570 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:13.959554542 +0000 UTC m=+21.335903263 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459677 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.459869 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:13.959800974 +0000 UTC m=+21.336149695 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459905 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.460013 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.460019 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.460051 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.460166 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462079 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462360 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462459 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462633 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462649 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462685 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462726 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462748 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462805 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462919 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.463662 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.464519 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.464606 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465186 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465214 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465335 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465646 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465908 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465794 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.466617 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.466953 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.466575 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.467517 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.467596 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.467959 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468721 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.468013 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:13.967973167 +0000 UTC m=+21.344321888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468085 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468203 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468482 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468256 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468728 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468264 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.469204 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.469540 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.480090 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.480219 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.480953 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.481757 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.482703 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.482990 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.483314 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.484016 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.484702 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.485064 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486004 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486316 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486477 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486504 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486480 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486781 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486889 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.487225 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.487192 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.487518 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.487765 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.487849 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488004 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488180 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488538 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488602 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488609 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488747 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.489186 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.494761 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.495068 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.498457 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.498507 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.499058 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.500166 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.500192 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.500472 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.507491 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.508795 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.518972 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.519381 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.528356 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529171 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42324c80-0f4d-4a2b-8374-fa2358bc8217-hosts-file\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnxf\" (UniqueName: \"kubernetes.io/projected/42324c80-0f4d-4a2b-8374-fa2358bc8217-kube-api-access-mxnxf\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529358 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/42324c80-0f4d-4a2b-8374-fa2358bc8217-hosts-file\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529396 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529546 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529572 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529588 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529616 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529630 4740 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529644 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529658 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529673 4740 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529686 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529701 4740 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529714 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529728 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529758 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529770 4740 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529780 4740 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529791 4740 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529804 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529861 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529875 4740 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529888 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529901 4740 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529914 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529942 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529956 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529969 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529982 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529994 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530008 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530025 4740 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530040 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530055 4740 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530069 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530081 4740 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530096 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530109 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on 
node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530122 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530135 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530147 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530159 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530174 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530187 4740 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530200 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530212 4740 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530225 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530238 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530251 4740 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530266 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530282 4740 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530294 4740 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530307 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530320 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530335 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530349 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530370 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530382 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530398 4740 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530415 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530428 4740 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530441 4740 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530454 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530467 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530479 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530491 4740 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530506 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530517 4740 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530529 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530541 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530554 4740 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530566 4740 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530579 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530592 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" 
DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530607 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530621 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530653 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530668 4740 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530682 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530695 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530709 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530721 4740 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530734 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530747 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530760 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530774 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530787 4740 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530800 4740 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530841 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530857 4740 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530871 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530885 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530898 4740 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530911 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530924 4740 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530938 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") 
on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530950 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530963 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530976 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530990 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531004 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531021 4740 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531035 4740 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc 
kubenswrapper[4740]: I0216 12:53:13.531049 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531064 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531078 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531091 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531104 4740 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531117 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531131 4740 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531144 4740 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531159 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531173 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531187 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531200 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531212 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531222 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531232 4740 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531242 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531255 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531267 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531284 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531298 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531310 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531321 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node 
\"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531331 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531342 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531354 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531363 4740 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531373 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531382 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531391 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531402 4740 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531411 4740 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531420 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531430 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531440 4740 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531450 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531461 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531472 4740 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531483 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531493 4740 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531539 4740 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531552 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531563 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531572 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531582 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531591 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531601 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531611 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531624 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531642 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531655 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531666 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath 
\"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531678 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531692 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531706 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531717 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531728 4740 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532296 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532312 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532350 4740 reconciler_common.go:293] "Volume detached for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532363 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532373 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532382 4740 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532392 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532407 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532431 4740 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532441 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532450 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532460 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532469 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532479 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532504 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532513 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532523 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532532 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532543 4740 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532552 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532564 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532596 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532609 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532623 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532638 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532669 4740 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.538325 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.540708 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.548699 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.549918 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnxf\" (UniqueName: \"kubernetes.io/projected/42324c80-0f4d-4a2b-8374-fa2358bc8217-kube-api-access-mxnxf\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.549882 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.555634 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.556224 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.563439 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.576738 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: W0216 12:53:13.582000 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ee23b2b7f4584c1af5f89795040d639dc9de01716edceb2e5a4d39f906f6163d WatchSource:0}: Error finding container ee23b2b7f4584c1af5f89795040d639dc9de01716edceb2e5a4d39f906f6163d: Status 404 returned error can't find the container with id ee23b2b7f4584c1af5f89795040d639dc9de01716edceb2e5a4d39f906f6163d Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.591723 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.603625 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.609319 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.614430 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.625702 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.633349 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.647308 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-q4qtj"] Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.647690 4740 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.650099 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.650275 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.650300 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.653965 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.654127 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.665994 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.677547 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.693784 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.704560 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.705385 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.706025 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.713138 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.717136 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.725849 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734583 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a46e0708-a1b9-4055-8abc-b3d8de6e5245-proxy-tls\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 
12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734657 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plntj\" (UniqueName: \"kubernetes.io/projected/a46e0708-a1b9-4055-8abc-b3d8de6e5245-kube-api-access-plntj\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734717 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a46e0708-a1b9-4055-8abc-b3d8de6e5245-rootfs\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734792 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a46e0708-a1b9-4055-8abc-b3d8de6e5245-mcd-auth-proxy-config\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734843 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734860 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734875 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734887 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.735051 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.754165 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.768653 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.835714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plntj\" (UniqueName: \"kubernetes.io/projected/a46e0708-a1b9-4055-8abc-b3d8de6e5245-kube-api-access-plntj\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.835749 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a46e0708-a1b9-4055-8abc-b3d8de6e5245-rootfs\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.835793 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a46e0708-a1b9-4055-8abc-b3d8de6e5245-mcd-auth-proxy-config\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.835933 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a46e0708-a1b9-4055-8abc-b3d8de6e5245-proxy-tls\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.836763 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a46e0708-a1b9-4055-8abc-b3d8de6e5245-mcd-auth-proxy-config\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.836833 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a46e0708-a1b9-4055-8abc-b3d8de6e5245-rootfs\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.842482 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a46e0708-a1b9-4055-8abc-b3d8de6e5245-proxy-tls\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.851932 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-plntj\" (UniqueName: \"kubernetes.io/projected/a46e0708-a1b9-4055-8abc-b3d8de6e5245-kube-api-access-plntj\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.936325 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.936442 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936663 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936686 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936700 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936737 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936762 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:14.93674135 +0000 UTC m=+22.313090071 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936772 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936791 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936918 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 12:53:14.936893722 +0000 UTC m=+22.313242633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.966661 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.022210 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-v88dn"] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.022583 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.023716 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mcb2z"] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.024101 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-msmgh"] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.028887 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029287 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029328 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029360 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029506 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029510 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029619 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.034551 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.034601 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.034551 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.034791 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.034885 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.035093 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.035126 4740 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.035138 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.035096 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.037650 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.037985 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:15.037954504 +0000 UTC m=+22.414303245 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038023 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-etc-kubernetes\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038070 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038105 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038132 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038155 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038185 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038213 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-os-release\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038249 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-conf-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038293 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.038303 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038327 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.039608 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:15.039546389 +0000 UTC m=+22.415895110 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.039688 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.039740 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.039865 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-cni-binary-copy\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.039926 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.039922 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cqh7\" 
(UniqueName: \"kubernetes.io/projected/21f981d4-46dd-4bb5-b244-aaf603008c5e-kube-api-access-8cqh7\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.039983 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cnibin\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.040003 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:15.039988633 +0000 UTC m=+22.416337554 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040056 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-system-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040100 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-kubelet\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040125 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-socket-dir-parent\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040168 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 
12:53:14.040191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040245 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040272 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040305 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040329 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc 
kubenswrapper[4740]: I0216 12:53:14.040358 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040385 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040423 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-multus-certs\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040452 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040488 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin\") pod \"ovnkube-node-msmgh\" (UID: 
\"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040522 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-binary-copy\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040545 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-daemon-config\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040586 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-system-cni-dir\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040618 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rml5w\" (UniqueName: \"kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040658 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040685 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040711 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-netns\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040774 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-bin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040803 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc7pl\" (UniqueName: \"kubernetes.io/projected/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-kube-api-access-mc7pl\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040854 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-os-release\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040882 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040913 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040933 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040950 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-multus\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040973 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-cnibin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040990 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-k8s-cni-cncf-io\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.041004 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-hostroot\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.044039 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.051525 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.061889 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.071672 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.081378 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.092806 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.104945 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.125540 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.137841 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142112 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cnibin\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142145 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-system-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-kubelet\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142177 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142208 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142223 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-socket-dir-parent\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142239 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns\") pod 
\"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142262 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142298 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142314 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-multus-certs\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142329 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142346 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd\") pod 
\"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142361 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142375 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142391 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-binary-copy\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142408 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-daemon-config\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142425 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-system-cni-dir\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: 
\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142448 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142463 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rml5w\" (UniqueName: \"kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142502 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-netns\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142518 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc7pl\" (UniqueName: \"kubernetes.io/projected/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-kube-api-access-mc7pl\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: 
\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142533 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-os-release\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142549 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-bin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142580 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-multus\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142597 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142612 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-hostroot\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142640 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-cnibin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142655 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-k8s-cni-cncf-io\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142672 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142686 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142703 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142718 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-etc-kubernetes\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142741 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142768 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142799 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142823 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-os-release\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142837 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-conf-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142865 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-cni-binary-copy\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142880 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cqh7\" (UniqueName: \"kubernetes.io/projected/21f981d4-46dd-4bb5-b244-aaf603008c5e-kube-api-access-8cqh7\") pod 
\"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143126 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cnibin\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143270 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-system-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143293 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-kubelet\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143314 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143335 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc 
kubenswrapper[4740]: I0216 12:53:14.143754 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-bin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143913 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143949 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144008 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-multus-certs\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144026 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144105 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144107 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144049 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144145 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144167 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-socket-dir-parent\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144383 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-multus\") pod \"multus-v88dn\" (UID: 
\"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144400 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144436 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-k8s-cni-cncf-io\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144495 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-conf-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144532 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144560 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 
16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144589 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144646 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-os-release\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144660 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144655 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144686 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144705 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-netns\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-cnibin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144724 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-etc-kubernetes\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144748 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144750 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144766 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-system-cni-dir\") pod 
\"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144771 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-os-release\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144783 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144795 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-hostroot\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.145046 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-cni-binary-copy\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " 
pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.145280 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-binary-copy\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.145448 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.146627 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-daemon-config\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.147885 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.148422 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 
12:53:14.157134 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 12:48:13 +0000 UTC, rotation deadline is 2026-11-13 14:33:07.786932992 +0000 UTC Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.157235 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6481h39m53.629701523s for next certificate rotation Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.163496 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cqh7\" (UniqueName: \"kubernetes.io/projected/21f981d4-46dd-4bb5-b244-aaf603008c5e-kube-api-access-8cqh7\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.164201 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc7pl\" (UniqueName: \"kubernetes.io/projected/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-kube-api-access-mc7pl\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.164791 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.167069 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rml5w\" (UniqueName: \"kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.175980 4740 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.190099 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.207886 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.222261 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.230886 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:25:35.297890451 +0000 UTC Feb 16 12:53:14 crc 
kubenswrapper[4740]: I0216 12:53:14.255805 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.272669 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.328188 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.349030 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.354356 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.359499 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.372859 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.379866 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: W0216 12:53:14.387839 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad2ec4df_11e9_4970_bd6b_c258ce2d08bb.slice/crio-27624147e5a1d1bd619b329f53692419eecc622090284c7a82a65faea5eaab0e WatchSource:0}: Error finding container 27624147e5a1d1bd619b329f53692419eecc622090284c7a82a65faea5eaab0e: Status 404 returned error can't find the container with id 27624147e5a1d1bd619b329f53692419eecc622090284c7a82a65faea5eaab0e Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.429399 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerStarted","Data":"d779e04aa061f91765ed436953c618c472ea0d9a00a456e348d56b2e0782ee5c"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.433767 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.433832 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.433845 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a7d5dfb5688b2b93feafa71e8584c4346d7001e888896dca26263e6bc549ad14"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.435652 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.435720 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.435732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"a3287e85de51a52db07ef29dacd4547a2a71069bb40cbf280a5504aae50e5ab1"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.436714 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ee23b2b7f4584c1af5f89795040d639dc9de01716edceb2e5a4d39f906f6163d"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.437841 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"eb66f3d2b37f21fe7aa111a136026ce7eb2cec2307821fe7b198f1e6beb272ce"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.439583 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerStarted","Data":"27624147e5a1d1bd619b329f53692419eecc622090284c7a82a65faea5eaab0e"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.441415 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ttqrb" event={"ID":"42324c80-0f4d-4a2b-8374-fa2358bc8217","Type":"ContainerStarted","Data":"880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.441440 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ttqrb" event={"ID":"42324c80-0f4d-4a2b-8374-fa2358bc8217","Type":"ContainerStarted","Data":"60057fc64fdaa9b8f385ea9918f07d39f190e1f2d3fbd52ac19d30448d547e62"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.442607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.442634 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e6007e017e68653eda1596ccc2546c8eeadfba98c27d4b9bb8182ab6b2d544ca"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.444231 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.444544 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.447227 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.447328 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" exitCode=255 Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.447364 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.447394 4740 scope.go:117] "RemoveContainer" containerID="d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.458695 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.481143 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.494177 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.505600 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.519086 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.531332 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.543090 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.546349 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.546845 4740 scope.go:117] "RemoveContainer" 
containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.547084 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.556029 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.567237 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.597457 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.623880 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T
12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.662619 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.699993 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.740782 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.780904 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.825093 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:52:57Z\\\",\\\"message\\\":\\\"W0216 12:52:56.643264 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 12:52:56.644080 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771246376 cert, and key in /tmp/serving-cert-2462593891/serving-signer.crt, /tmp/serving-cert-2462593891/serving-signer.key\\\\nI0216 12:52:57.084088 1 observer_polling.go:159] Starting file observer\\\\nW0216 12:52:57.086632 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 12:52:57.086785 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:52:57.088200 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2462593891/tls.crt::/tmp/serving-cert-2462593891/tls.key\\\\\\\"\\\\nF0216 12:52:57.304564 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.860787 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.902310 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.942698 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.950789 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951136 4740 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951210 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951174 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951234 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951269 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951287 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951312 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:16.951287466 +0000 UTC m=+24.327636377 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.951209 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951453 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:16.951429418 +0000 UTC m=+24.327778209 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.980411 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.029375 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.052147 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.052284 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.052316 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.052417 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.052415 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.052470 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:17.05245682 +0000 UTC m=+24.428805541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.052515 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:17.05249084 +0000 UTC m=+24.428839721 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.052627 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:17.052614161 +0000 UTC m=+24.428963082 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.064314 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.231434 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:46:14.996936557 +0000 UTC Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.280953 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.281118 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.280972 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.281327 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.281473 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.281539 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.286119 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.286914 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.288068 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.288721 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.289689 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.290218 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.290888 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.291873 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.292641 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.293738 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.294350 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.295947 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.296490 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.297014 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.297907 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.298402 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.299369 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.299780 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.300322 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.301287 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.301742 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.302714 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.303181 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.304219 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.304666 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.305497 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.306695 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.307170 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.308402 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.309057 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.309961 4740 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.310061 4740 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.311646 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.312648 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.313466 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.315206 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.316042 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.316992 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.317695 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.320016 4740 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.320597 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.321618 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.322399 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.323446 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.323935 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.324798 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.325394 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.326796 4740 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.327456 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.328471 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.328951 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.329989 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.330588 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.331055 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.452552 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb" exitCode=0 Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 
12:53:15.452602 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb"} Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.454667 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187" exitCode=0 Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.454744 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187"} Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.456780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerStarted","Data":"a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd"} Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.459260 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.463262 4740 scope.go:117] "RemoveContainer" containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.463466 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.471213 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.490084 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.520902 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:52:57Z\\\",\\\"message\\\":\\\"W0216 12:52:56.643264 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 12:52:56.644080 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771246376 cert, and key in /tmp/serving-cert-2462593891/serving-signer.crt, /tmp/serving-cert-2462593891/serving-signer.key\\\\nI0216 12:52:57.084088 1 observer_polling.go:159] Starting file observer\\\\nW0216 12:52:57.086632 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 12:52:57.086785 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:52:57.088200 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2462593891/tls.crt::/tmp/serving-cert-2462593891/tls.key\\\\\\\"\\\\nF0216 12:52:57.304564 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.567186 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.598027 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.622693 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.644838 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.672276 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.692720 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.708363 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.725909 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.742316 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.764054 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.787388 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.800920 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.821306 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.848850 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.866132 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.889885 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.905329 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.909783 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7zs65"] Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.910205 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.912747 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.924018 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.930486 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.952074 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.961976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-host\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.962008 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-serviceca\") pod \"node-ca-7zs65\" (UID: 
\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.962058 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzq7l\" (UniqueName: \"kubernetes.io/projected/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-kube-api-access-tzq7l\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.976270 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.002738 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.023100 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.062885 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzq7l\" (UniqueName: \"kubernetes.io/projected/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-kube-api-access-tzq7l\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 
crc kubenswrapper[4740]: I0216 12:53:16.062935 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-host\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.062955 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-serviceca\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.063086 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-host\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.063888 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-serviceca\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.064903 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.094580 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzq7l\" (UniqueName: 
\"kubernetes.io/projected/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-kube-api-access-tzq7l\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.122336 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.158649 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.201885 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.232319 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:16:22.194416461 +0000 UTC Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.240158 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.248188 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: W0216 12:53:16.278689 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6020d2c6_e8f9_4ca7_b6c4_c219193a42e6.slice/crio-b8353901a81c0ac7ed51bd0d481a5c78af0b95112f4a0d5aa927c1f17bc1f75f WatchSource:0}: Error finding container b8353901a81c0ac7ed51bd0d481a5c78af0b95112f4a0d5aa927c1f17bc1f75f: Status 404 returned error can't find the container with id b8353901a81c0ac7ed51bd0d481a5c78af0b95112f4a0d5aa927c1f17bc1f75f Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.282889 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.320394 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.361247 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.398866 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.442936 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.468637 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b" exitCode=0 Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.468710 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476416 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476469 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476484 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476507 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476518 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.477695 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7zs65" 
event={"ID":"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6","Type":"ContainerStarted","Data":"b8353901a81c0ac7ed51bd0d481a5c78af0b95112f4a0d5aa927c1f17bc1f75f"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.479933 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.480869 4740 scope.go:117] "RemoveContainer" containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.481156 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.482211 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.522645 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.560098 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.603195 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.641458 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.680306 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.721269 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.759999 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.800638 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.842085 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.885545 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.924351 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.968923 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.972145 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.972237 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972334 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972369 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972380 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972450 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972471 4740 projected.go:194] Error 
preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972472 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972542 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:20.972520273 +0000 UTC m=+28.348869024 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972570 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:20.972558025 +0000 UTC m=+28.348906786 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.011009 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.057892 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.073374 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.073598 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.073716 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.073735 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:21.073685714 +0000 UTC m=+28.450034475 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.073805 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:21.073780656 +0000 UTC m=+28.450129407 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.073886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.074101 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.074171 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:21.074154808 +0000 UTC m=+28.450503569 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.090575 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.129716 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.166440 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.232515 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:21:16.131042337 +0000 UTC Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.280968 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.281012 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.280970 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.281119 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.281230 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.281391 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.488906 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7" exitCode=0 Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.489039 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7"} Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.492155 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7zs65" event={"ID":"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6","Type":"ContainerStarted","Data":"13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e"} Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.506959 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.534232 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.549545 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.570481 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.582355 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.602131 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 
12:53:17.613220 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.634138 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.646935 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.658170 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.670264 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.690713 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.703018 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.720280 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.760326 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.797257 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.844351 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.895156 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 
12:53:17.931098 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.960726 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.001607 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.039867 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.078905 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.125383 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.160440 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.201101 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.233360 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:42:38.557611797 +0000 UTC Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.498428 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a"} Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.500662 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16" exitCode=0 Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.500729 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16"} Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.520895 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.533875 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.553424 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.566928 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.581381 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.592136 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.604402 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.618101 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.628625 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.643178 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a
ef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.654672 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.678985 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.695406 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.700432 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.722840 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.738083 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.783406 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.820397 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.863979 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.906991 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.939708 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.980592 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.020735 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.063331 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.103155 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.144526 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.187766 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.232799 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.233868 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:06:14.463356873 +0000 UTC Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.264905 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.280385 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.280467 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.280503 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.280666 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.280796 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.281016 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.509274 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96" exitCode=0 Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.509345 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.516043 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.520356 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.520451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.521152 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.521639 4740 kubelet_node_status.go:76] "Attempting to register node" 
node="crc" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.534012 4740 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.534322 4740 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.538597 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.538651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.538670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.538695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.538714 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.539340 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.559279 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.563704 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.566466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.566518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.566531 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.566552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.566564 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.582947 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.587831 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.588322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.588374 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.588387 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.588409 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.588424 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.607732 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.608551 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.612881 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.612924 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.612935 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.612953 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.612967 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.628164 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.631285 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.635225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.635268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.635283 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.635305 
4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.635356 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.645008 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.652384 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.652665 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.656563 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.656623 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.656643 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.656667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.656682 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.665523 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b
506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.679120 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f00307
7f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.700001 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.716473 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.743409 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.759684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.759733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.759743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.759760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.759771 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.781558 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.819865 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.858431 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.862233 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.862313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.862326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.862349 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.862373 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.965234 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.965277 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.965286 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.965302 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.965310 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.068681 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.068719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.068737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.068755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.068767 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.172109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.172180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.172203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.172235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.172259 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.235146 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:38:21.962462766 +0000 UTC Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.275402 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.275456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.275477 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.275498 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.275512 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.379228 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.379658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.379687 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.379721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.379746 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.482577 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.482733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.482900 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.483053 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.483177 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.517753 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.517969 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.518146 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.518299 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.521124 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4" exitCode=0 Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.521189 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.540854 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.549965 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.551302 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.556285 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.574570 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.586236 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.586263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 
12:53:20.586275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.586291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.586303 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.589932 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.602535 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.614945 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.627555 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.641987 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.654756 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.671072 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.686899 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.689141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.689166 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.689174 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.689187 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.689197 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.702535 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07c
cc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.715024 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.740626 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.755511 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.772518 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.783090 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.792176 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.792223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.792234 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.792254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.792269 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.799318 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.812660 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.825825 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.838899 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.851890 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.868342 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.884212 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.895097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.895125 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.895133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.895149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.895159 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.902023 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.918204 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.939393 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.987394 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.001965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.002007 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.002020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.002038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.002050 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.017786 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.017883 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018025 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018028 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018077 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018099 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:21 crc 
kubenswrapper[4740]: E0216 12:53:21.018045 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018165 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018172 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.018145607 +0000 UTC m=+36.394494358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018219 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.018203648 +0000 UTC m=+36.394552539 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.105182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.105229 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.105240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.105256 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.105269 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.118658 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.118945 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.119457 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.119542 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.119517193 +0000 UTC m=+36.495865954 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.119649 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.119610926 +0000 UTC m=+36.495959677 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.119711 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.119926 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.119989 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.119975327 +0000 UTC m=+36.496324078 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.208684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.208734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.208745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.208767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.208779 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.237133 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 13:45:24.065587729 +0000 UTC Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.281168 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.281232 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.281168 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.281343 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.281417 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.281557 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.312512 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.312581 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.312599 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.312625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.312648 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.415103 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.415145 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.415156 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.415173 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.415183 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.518062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.518129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.518150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.518181 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.518202 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.529094 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerStarted","Data":"504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.544477 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.564584 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.576974 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.595469 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.614745 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.620721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.620781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.620795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.620858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.620898 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.626468 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.641208 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.659237 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.670975 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.686731 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.710214 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724054 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724599 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724610 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.738087 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.752616 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.827702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.828071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.828160 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.828272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.828359 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.930735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.931051 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.931176 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.931306 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.931440 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.033909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.033992 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.034014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.034044 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.034062 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.137688 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.137766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.137847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.137890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.137916 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.237768 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:25:03.53814184 +0000 UTC Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.245542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.245586 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.245598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.245621 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.245635 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.349038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.349101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.349117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.349143 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.349163 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.452100 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.452179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.452209 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.452248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.452274 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.556058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.556119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.556140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.556170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.556190 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.658650 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.658725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.658745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.658790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.658837 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.761281 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.761332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.761344 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.761363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.761374 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.864703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.864763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.864775 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.864797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.864825 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.967210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.967272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.967289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.967310 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.967322 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.069283 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.069324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.069335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.069349 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.069359 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.073648 4740 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.171427 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.171674 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.171684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.171701 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.171710 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.238120 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 14:11:12.750922948 +0000 UTC Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.273622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.273666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.273677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.273693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.273704 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.280487 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.280533 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.280596 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:23 crc kubenswrapper[4740]: E0216 12:53:23.280724 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:23 crc kubenswrapper[4740]: E0216 12:53:23.281056 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:23 crc kubenswrapper[4740]: E0216 12:53:23.281117 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.295248 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.306685 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.319452 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.331804 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.349650 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.362084 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.376089 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.377252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.377291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.377304 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.377320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: 
I0216 12:53:23.377331 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.391343 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.405493 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.417689 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.446227 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480373 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480628 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.499439 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.512249 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.537197 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/0.log" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.539781 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" 
containerID="c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8" exitCode=1 Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.539847 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.540523 4740 scope.go:117] "RemoveContainer" containerID="c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.552894 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.565513 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.575844 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.582642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.582737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.582752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.582771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.582784 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.587288 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.600955 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.612752 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.628149 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.638434 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.661911 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.677864 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.685068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.685114 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.685124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.685138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.685147 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.698544 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.716117 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.727619 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.743564 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 
12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f
2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.787593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.787632 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.787643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.787658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.787669 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.889708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.890037 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.890109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.890198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.890276 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.993828 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.993878 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.993893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.993912 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.993927 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.096393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.096432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.096442 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.096457 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.096466 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.199349 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.199408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.199429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.199451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.199465 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.239297 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:56:04.139519246 +0000 UTC Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.301899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.301958 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.301971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.301991 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.302005 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.403367 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.403405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.403419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.403443 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.403455 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.505628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.505672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.505683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.505697 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.505708 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.547806 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/1.log" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.548633 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/0.log" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.551699 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f" exitCode=1 Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.551752 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.551869 4740 scope.go:117] "RemoveContainer" containerID="c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.552759 4740 scope.go:117] "RemoveContainer" containerID="2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f" Feb 16 12:53:24 crc kubenswrapper[4740]: E0216 12:53:24.553005 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.576861 4740 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.600158 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.609347 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.609414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.609432 4740 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.609461 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.609479 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.622183 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.658804 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.673691 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.687308 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.699364 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.712160 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.712247 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.712264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.712285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.712301 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.715404 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.728115 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.740699 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.756908 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.769714 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.783385 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.800922 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.815108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.815162 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.815174 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.815192 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.815205 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.917764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.917868 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.917888 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.917921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.917945 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.020965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.021021 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.021039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.021061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.021078 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.124015 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.124065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.124082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.124107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.124124 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.226447 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.226509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.226532 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.226561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.226585 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.239483 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:14:23.636942584 +0000 UTC Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.281139 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.281320 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:25 crc kubenswrapper[4740]: E0216 12:53:25.281526 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.282013 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:25 crc kubenswrapper[4740]: E0216 12:53:25.282187 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:25 crc kubenswrapper[4740]: E0216 12:53:25.282275 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.329786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.329858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.329872 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.329893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.329908 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.433565 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.433622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.433640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.433666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.433685 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.536904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.537070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.537089 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.537149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.537165 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.556648 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/1.log" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.640759 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.640892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.640921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.641004 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.641025 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.744326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.744594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.744608 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.744627 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.744642 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.848471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.848541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.848563 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.848589 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.848605 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.951494 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.951554 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.951573 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.951597 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.951617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.054334 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.054387 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.054399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.054416 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.054428 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.157547 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.157633 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.157657 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.157685 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.157706 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.240382 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:41:05.091903085 +0000 UTC Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.260592 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.260650 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.260670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.260694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.260711 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.363294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.363376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.363400 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.363428 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.363449 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.466801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.466904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.466927 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.466955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.466974 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.557860 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn"] Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.558640 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.561083 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.561870 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.570405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.570479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.570502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.570562 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.570587 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.586226 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.605673 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.639287 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.661374 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.672986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.673058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.673074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.673103 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.673121 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.684088 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.684188 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzdl5\" (UniqueName: \"kubernetes.io/projected/872ae2f5-5967-4ebe-b05f-148a0f7402f7-kube-api-access-pzdl5\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.684245 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.684332 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.684636 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.711742 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.725897 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.744944 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a
ef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.762318 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.775261 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.775292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.775300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc 
kubenswrapper[4740]: I0216 12:53:26.775314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.775328 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.778736 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.786536 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.786592 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pzdl5\" (UniqueName: \"kubernetes.io/projected/872ae2f5-5967-4ebe-b05f-148a0f7402f7-kube-api-access-pzdl5\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.786627 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.786652 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.787379 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.787381 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" 
Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.792850 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.799459 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.807410 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.811253 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzdl5\" (UniqueName: \"kubernetes.io/projected/872ae2f5-5967-4ebe-b05f-148a0f7402f7-kube-api-access-pzdl5\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.818773 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.836138 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.855396 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.877788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.877882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.877898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.877921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.877935 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.880991 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: W0216 12:53:26.901003 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod872ae2f5_5967_4ebe_b05f_148a0f7402f7.slice/crio-60108bed7ed3e06417012c7bd81e1b01b4c6607bd97e3eeae322a6d5c835affd WatchSource:0}: Error finding container 60108bed7ed3e06417012c7bd81e1b01b4c6607bd97e3eeae322a6d5c835affd: Status 404 returned error can't find the container with id 60108bed7ed3e06417012c7bd81e1b01b4c6607bd97e3eeae322a6d5c835affd Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.980982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.981031 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.981089 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.981101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.981110 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.085353 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.085894 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.085913 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.085932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.085947 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.189429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.189477 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.189494 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.189515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.189529 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.241293 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:33:24.991130406 +0000 UTC Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.281114 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.281138 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.281249 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.281308 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.281379 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.281723 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.291576 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.291612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.291622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.291637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.291648 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.393568 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.393593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.393602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.393614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.393623 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.496358 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.496384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.496392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.496404 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.496414 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.571562 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" event={"ID":"872ae2f5-5967-4ebe-b05f-148a0f7402f7","Type":"ContainerStarted","Data":"33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.571940 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" event={"ID":"872ae2f5-5967-4ebe-b05f-148a0f7402f7","Type":"ContainerStarted","Data":"759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.572116 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" event={"ID":"872ae2f5-5967-4ebe-b05f-148a0f7402f7","Type":"ContainerStarted","Data":"60108bed7ed3e06417012c7bd81e1b01b4c6607bd97e3eeae322a6d5c835affd"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.585413 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.599236 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.599267 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.599288 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.599307 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.599319 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.600445 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z 
is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.614852 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.625711 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.642758 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a
ef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.658577 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.675189 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.690104 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.701535 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.701603 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.701615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.701634 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.701647 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.706300 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-tcfzx"] Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.706740 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.706802 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.707174 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.724265 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.744534 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.758318 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.779004 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.796921 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cphq\" (UniqueName: \"kubernetes.io/projected/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-kube-api-access-5cphq\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc 
kubenswrapper[4740]: I0216 12:53:27.797000 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.798391 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.804743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.804796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.804834 4740 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.804854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.804869 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.809366 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.825227 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.837057 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.849047 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.860412 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.872756 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.887161 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a
1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.898170 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cphq\" (UniqueName: \"kubernetes.io/projected/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-kube-api-access-5cphq\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.898223 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.898256 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.898341 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.898438 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:28.39842161 +0000 UTC m=+35.774770341 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.906805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.906859 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.906868 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.906882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.906892 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.913874 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc 
kubenswrapper[4740]: I0216 12:53:27.921628 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cphq\" (UniqueName: \"kubernetes.io/projected/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-kube-api-access-5cphq\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.929029 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.941527 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.954718 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.969302 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.989104 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07c
cc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.002100 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.009032 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.009058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.009067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc 
kubenswrapper[4740]: I0216 12:53:28.009081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.009089 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.033608 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.050593 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.111221 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.111274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.111287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.111303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.111317 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.213362 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.213401 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.213413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.213429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.213441 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.242018 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:51:25.362754336 +0000 UTC Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.316080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.316124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.316135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.316151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.316163 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.402468 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:28 crc kubenswrapper[4740]: E0216 12:53:28.402747 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:28 crc kubenswrapper[4740]: E0216 12:53:28.402902 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.40287008 +0000 UTC m=+36.779218831 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.418995 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.419060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.419077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.419103 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.419121 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.521702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.521735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.521746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.521765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.521777 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.624094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.624152 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.624171 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.624195 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.624213 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.726848 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.726892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.726903 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.726918 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.726928 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.829338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.829377 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.829390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.829407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.829419 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.932891 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.932941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.932957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.932978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.932993 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.035882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.035952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.035972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.035999 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.036018 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.112678 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.112741 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.112911 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.112929 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.112939 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.112991 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:45.112977749 +0000 UTC m=+52.489326470 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.113063 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.113118 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.113146 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.113242 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:45.113209825 +0000 UTC m=+52.489558696 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.139014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.139064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.139080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.139097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.139109 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.213429 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.213567 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.213626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.213679 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.213712 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:45.213672605 +0000 UTC m=+52.590021356 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.213771 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:45.213746327 +0000 UTC m=+52.590095078 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.213776 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.213884 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:45.213858619 +0000 UTC m=+52.590207530 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.241700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.241772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.241793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.241844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.241863 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.242413 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:09:23.723904216 +0000 UTC Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.280505 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.280530 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.280580 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.280584 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.280883 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.281067 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.281285 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.281456 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.282017 4740 scope.go:117] "RemoveContainer" containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.345678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.345752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.345783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.345870 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.345894 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.416666 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.416890 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.416978 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:31.416960763 +0000 UTC m=+38.793309504 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.448828 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.448874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.448889 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.448905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.448915 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.551352 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.551404 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.551419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.551438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.551450 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.586010 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.588990 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.589908 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.607853 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-
cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.625250 4740 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.643315 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.654177 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.654211 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.654221 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.654235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc 
kubenswrapper[4740]: I0216 12:53:29.654248 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.662148 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc 
kubenswrapper[4740]: I0216 12:53:29.677839 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.695610 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.713161 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.729139 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.746384 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.756621 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.756667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.756678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.756693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.756704 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.767437 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.788189 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.799317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 
12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.799364 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.799374 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.799390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.799402 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.803679 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.818132 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.818624 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.821789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.821850 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc 
kubenswrapper[4740]: I0216 12:53:29.821863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.821880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.821892 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.832377 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.837635 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.841493 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.841540 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.841551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.841567 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.841577 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.847191 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.854855 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.857905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.857944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.857956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.857970 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.857979 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.869250 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.876705 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.880926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.880963 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.880984 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.881001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.881010 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.893840 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.894002 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.896020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.896049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.896058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.896072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.896081 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.999178 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.999241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.999265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.999293 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.999314 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.102272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.102346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.102368 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.102400 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.102421 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.205298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.205359 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.205376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.205401 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.205419 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.243320 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:15:49.796028849 +0000 UTC Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.308521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.308598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.308620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.308652 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.308676 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.411909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.411972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.411994 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.412021 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.412039 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.520209 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.520557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.520644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.520742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.520852 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.623489 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.623557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.623577 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.623600 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.623619 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.726718 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.726757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.726765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.726797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.726837 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.829790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.829884 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.829906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.829931 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.829948 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.932529 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.932578 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.932594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.932615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.932631 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.035061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.035124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.035142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.035166 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.035184 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.138694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.138764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.138777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.138839 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.138855 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.241290 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.241574 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.241736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.241866 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.241995 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.244490 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:38:15.147473105 +0000 UTC Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.280348 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.280432 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.280501 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.280525 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.280605 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.280720 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.280729 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.280871 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.344783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.344845 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.344858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.344875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.344887 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.435719 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.435959 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.436081 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:35.43605377 +0000 UTC m=+42.812402701 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.447972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.448033 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.448046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.448065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.448077 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.551867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.551921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.551933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.551955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.552216 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.655594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.655641 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.655652 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.655671 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.655681 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.758740 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.758793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.758804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.758841 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.758853 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.861632 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.861686 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.861697 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.861718 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.861732 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.964373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.964413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.964425 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.964442 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.964454 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.067264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.067296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.067305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.067317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.067325 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.170523 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.170574 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.170590 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.170614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.170631 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.244875 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:28:24.717493896 +0000 UTC Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.272987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.273048 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.273067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.273098 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.273116 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.377013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.377075 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.377102 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.377129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.377146 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.479794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.479844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.479852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.479869 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.479878 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.583023 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.583061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.583072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.583086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.583097 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.691320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.691364 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.691376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.691392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.691403 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.793738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.793774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.793783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.793795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.793804 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.897179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.897582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.897961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.898340 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.898742 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.001702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.001742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.001752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.001767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.001779 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.104336 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.104371 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.104379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.104394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.104403 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.207751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.207797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.207831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.207855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.207867 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.245630 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:49:02.97468691 +0000 UTC Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.280899 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:33 crc kubenswrapper[4740]: E0216 12:53:33.281047 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.281142 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:33 crc kubenswrapper[4740]: E0216 12:53:33.281400 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.281479 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.281509 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:33 crc kubenswrapper[4740]: E0216 12:53:33.281555 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:33 crc kubenswrapper[4740]: E0216 12:53:33.281604 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.298234 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.310331 4740 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.310545 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.310637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.310770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.310903 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.313969 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.340919 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.359348 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.378378 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.396895 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.412964 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.413249 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.413358 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.413226 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.413438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.413573 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.424895 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.446197 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.461077 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.477665 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.489350 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc 
kubenswrapper[4740]: I0216 12:53:33.503696 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515304 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515337 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515418 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.533457 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.547207 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.617800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.617887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.617900 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.617922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.617940 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.720880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.720928 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.720944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.720966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.720978 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.824343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.824424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.824451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.824482 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.824507 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.928074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.928139 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.928156 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.928178 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.928196 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.030139 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.030186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.030197 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.030231 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.030243 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.133082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.133133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.133150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.133171 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.133183 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.235871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.235999 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.236018 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.236041 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.236058 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.246638 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:51:38.880122209 +0000 UTC Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.339211 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.339653 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.339902 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.340124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.340293 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.443843 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.443895 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.443906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.443926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.443939 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.546741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.546785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.546794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.546829 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.546838 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.649904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.649950 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.649959 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.649974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.649984 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.753000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.753065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.753082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.753110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.753127 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.856035 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.856126 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.856146 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.856169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.856186 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.959244 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.959302 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.959317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.959342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.959359 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.062009 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.062052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.062065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.062091 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.062103 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.165058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.165096 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.165108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.165124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.165136 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.247607 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:19:14.66054104 +0000 UTC Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.268395 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.268433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.268444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.268553 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.268567 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.280491 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.280598 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.280525 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.280508 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.280731 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.280986 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.281050 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.281230 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.371854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.371915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.371938 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.371968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.371990 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.475086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.475161 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.475186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.475219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.475241 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.479905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.480085 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.480202 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:43.480169029 +0000 UTC m=+50.856517790 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.578183 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.578223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.578232 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.578253 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.578265 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.681544 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.681620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.681640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.681672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.681710 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.784181 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.784271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.784285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.784317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.784333 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.887219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.887306 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.887372 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.887407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.887432 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.990167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.990233 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.990257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.990280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.990297 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.093229 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.093272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.093282 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.093299 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.093311 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.197330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.197406 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.197428 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.197456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.197479 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.248488 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:47:47.645406077 +0000 UTC Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.300204 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.300259 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.300268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.300287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.300298 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.402700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.402760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.402776 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.402801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.402882 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.506426 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.506497 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.506521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.506551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.506575 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.609656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.609744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.609763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.609796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.609841 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.712902 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.712955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.712965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.712986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.713001 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.816626 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.816698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.816717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.816744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.816768 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.918874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.918935 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.918954 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.918978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.918995 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.022774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.022852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.022876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.022899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.022914 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.125578 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.125896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.125994 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.126068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.126124 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.229172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.229223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.229237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.229257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.229273 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.248675 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:34:35.322978762 +0000 UTC Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.281051 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.281185 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:37 crc kubenswrapper[4740]: E0216 12:53:37.281266 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.281303 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:37 crc kubenswrapper[4740]: E0216 12:53:37.281451 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:37 crc kubenswrapper[4740]: E0216 12:53:37.281601 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.281097 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:37 crc kubenswrapper[4740]: E0216 12:53:37.281836 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.331301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.331626 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.331726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.331801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.331906 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.434199 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.434252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.434263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.434567 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.434605 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.537371 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.537408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.537417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.537430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.537439 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.638978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.639011 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.639022 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.639036 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.639046 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.742014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.742062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.742112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.742131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.742146 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.845573 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.845639 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.845658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.845683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.845703 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.949069 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.949125 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.949140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.949161 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.949173 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.052676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.052728 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.052742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.052761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.052774 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.156272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.156331 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.156353 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.156383 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.156404 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.249867 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 21:38:19.860712453 +0000 UTC Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.259942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.260010 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.260032 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.260090 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.260114 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.363326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.363393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.363408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.363433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.363445 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.467266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.467326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.467348 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.467379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.467486 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.570734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.570806 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.570863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.570902 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.570925 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.675173 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.675243 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.675265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.675296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.675318 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.785076 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.785540 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.785735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.785975 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.786168 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.889163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.889216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.889233 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.889258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.889276 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.992142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.992223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.992246 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.992275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.992298 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.095468 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.095561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.095584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.095613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.095634 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.198553 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.198644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.198680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.198711 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.198731 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.250057 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:34:53.081519998 +0000 UTC Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.281043 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.281115 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.281242 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:39 crc kubenswrapper[4740]: E0216 12:53:39.281432 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.281448 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:39 crc kubenswrapper[4740]: E0216 12:53:39.281585 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:39 crc kubenswrapper[4740]: E0216 12:53:39.281685 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:39 crc kubenswrapper[4740]: E0216 12:53:39.281730 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.302115 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.302181 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.302198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.302223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.302241 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.405849 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.405922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.405951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.405984 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.406009 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.508943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.509017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.509040 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.509071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.509092 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.611982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.612037 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.612056 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.612080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.612096 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.715590 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.715638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.715655 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.715678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.715695 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.819741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.819849 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.819869 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.819896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.819925 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.922645 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.923024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.923154 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.923360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.923520 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.026481 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.026556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.026582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.026612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.026635 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.129859 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.129933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.129957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.129989 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.130010 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.180427 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.180584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.180605 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.180636 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.180652 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.200496 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.205591 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.205667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.205692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.205718 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.205736 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.227870 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.234043 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.234124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.234149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.234184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.234207 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.250559 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:57:26.951318932 +0000 UTC Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.254957 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",
\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.260002 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.260072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.260097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.260128 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.261939 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.280275 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.281367 4740 scope.go:117] "RemoveContainer" containerID="2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.287346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.287410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.287430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.287460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.287482 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.311877 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.311945 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.312283 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.317675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.317762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.317783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.317839 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.317870 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.340956 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.354925 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.376836 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.391998 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.406837 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.419551 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.421385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.421449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.421463 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.421518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.421533 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.437970 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6
dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.459060 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.474459 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.503743 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.522438 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.523834 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.523867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.523877 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.523893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.523904 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.539049 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.550130 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.561951 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc 
kubenswrapper[4740]: I0216 12:53:40.575077 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.626332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.626373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.626382 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.626397 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.626409 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.627192 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/1.log" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.630512 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.630856 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.644710 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51
c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.665036 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.685458 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.702198 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.720113 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.728613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.728672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.728681 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.728694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.728702 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.736445 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.760634 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.776349 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.794369 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b
178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.805749 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.816473 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc 
kubenswrapper[4740]: I0216 12:53:40.827848 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.830968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.831024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.831041 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.831068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.831084 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.838308 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.848423 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.859978 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.872131 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.933521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.933560 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.933571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.933585 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.933595 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.036094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.036146 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.036163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.036189 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.036206 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.138675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.138714 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.138727 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.138744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.138756 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.245386 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.245450 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.245471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.245512 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.245538 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.250846 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:35:24.557149349 +0000 UTC Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.281095 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.281155 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:41 crc kubenswrapper[4740]: E0216 12:53:41.281262 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.281299 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.281338 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:41 crc kubenswrapper[4740]: E0216 12:53:41.281458 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:41 crc kubenswrapper[4740]: E0216 12:53:41.281587 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:41 crc kubenswrapper[4740]: E0216 12:53:41.281735 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.349412 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.349500 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.349534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.349566 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.349588 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.453861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.453933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.453953 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.454016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.454036 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.558090 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.558150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.558171 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.558215 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.558236 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.637535 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/2.log" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.638458 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/1.log" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.642756 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e" exitCode=1 Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.642860 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.642946 4740 scope.go:117] "RemoveContainer" containerID="2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.644169 4740 scope.go:117] "RemoveContainer" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e" Feb 16 12:53:41 crc kubenswrapper[4740]: E0216 12:53:41.644523 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.661206 4740 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.661451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.661474 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.661492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.661506 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.665389 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6
dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.679702 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.692427 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc 
kubenswrapper[4740]: I0216 12:53:41.710837 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.727851 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.748221 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.763275 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.765438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.765504 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.765526 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.765557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.765578 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.778886 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z 
is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.802141 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.818757 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.839671 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.859047 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.868542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.868588 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.868602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.868624 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.868640 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.874414 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.901731 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical 
port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"hos
t-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.915490 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.926780 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.970978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.971082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.971101 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.971125 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.971139 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.074015 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.074090 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.074107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.074131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.074149 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.177456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.177524 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.177560 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.177588 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.177609 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.251053 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:08:08.637867404 +0000 UTC Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.281347 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.281397 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.281414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.281438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.281455 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.384150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.384287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.384314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.384343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.384362 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.487841 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.487896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.487913 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.487941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.487958 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.591084 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.591395 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.591560 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.591700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.591847 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.656677 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/2.log" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.663785 4740 scope.go:117] "RemoveContainer" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e" Feb 16 12:53:42 crc kubenswrapper[4740]: E0216 12:53:42.664074 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.683863 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.694623 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.694693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.694715 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.694747 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.694765 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.700879 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdf
e053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.721479 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.736516 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.753804 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.769760 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9
cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.787950 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888
154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e
496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.797654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.797709 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.797720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.797738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.797766 4740 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.802439 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.816285 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc 
kubenswrapper[4740]: I0216 12:53:42.828729 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.845004 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.864629 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.881913 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.898683 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.899973 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.900022 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.900033 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.900045 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.900054 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.915705 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.938128 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.002453 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.002508 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.002524 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.002546 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.002562 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.105621 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.105689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.105712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.105752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.105785 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.209009 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.209092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.209111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.209138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.209155 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.251487 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:00:02.017860521 +0000 UTC Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.280696 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.280786 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.280852 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.280924 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.281130 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.281689 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.281802 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.281898 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.299278 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.311219 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.312186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.312275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.312307 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.312346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.312376 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.331738 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.347425 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.363885 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.377456 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.390278 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.407382 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.416002 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.416064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.416076 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.416099 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.416120 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.422917 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z 
is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.440416 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.453586 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.465875 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc 
kubenswrapper[4740]: I0216 12:53:43.480317 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.495262 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.511792 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.518696 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.518734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.518745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.518761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.518773 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.525391 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.573198 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.573473 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.573642 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:59.573601311 +0000 UTC m=+66.949950202 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.621040 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.621080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.621091 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.621108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.621119 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.723459 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.723519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.723532 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.723558 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.723572 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.827073 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.827454 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.827530 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.827634 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.827716 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.930746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.931240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.931313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.931385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.931462 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.035122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.035191 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.035208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.035232 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.035251 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.138941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.139002 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.139014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.139036 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.139051 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.242425 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.242503 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.242527 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.242556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.242580 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.252002 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:59:17.281423474 +0000 UTC Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.345466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.345585 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.345611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.345647 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.345672 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.449753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.450399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.450473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.450598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.450677 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.553726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.553763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.553773 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.553837 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.553853 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.657595 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.657644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.657654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.657676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.657686 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.761224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.761692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.761789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.761911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.762020 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.866051 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.866102 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.866117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.866142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.866159 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.969715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.969798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.969861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.969901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.969927 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.079097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.079465 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.079606 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.079790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.079968 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.183631 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.183695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.183707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.183731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.183743 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.192637 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.192796 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193002 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193028 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193050 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193098 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:45 crc 
kubenswrapper[4740]: E0216 12:53:45.193055 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193180 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:54:17.193153949 +0000 UTC m=+84.569502710 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193183 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193279 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:54:17.193260273 +0000 UTC m=+84.569609034 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.253176 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:12:17.749613307 +0000 UTC Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.280795 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.281016 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.281663 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.281796 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.281699 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.282031 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.282187 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.282364 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.287315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.287365 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.287385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.287417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.287443 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.293951 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.294157 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 12:54:17.294120194 +0000 UTC m=+84.670468915 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.294298 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.294360 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.294463 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.294575 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.294601 4740 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:54:17.294560136 +0000 UTC m=+84.670909057 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.294658 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:54:17.294648668 +0000 UTC m=+84.670997389 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.391153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.391221 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.391239 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.391268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.391289 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.494976 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.495041 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.495057 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.495082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.495100 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.598048 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.598135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.598153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.598182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.598199 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.701931 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.702262 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.702364 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.702509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.702621 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.806150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.806215 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.806235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.806267 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.806286 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.856401 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.868277 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.877056 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.889333 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.906703 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b
178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.909133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.909190 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.909205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.909226 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.909241 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.921986 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.939686 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.960248 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.978762 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.997529 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.010953 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.010985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.010996 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.011011 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.011024 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.012587 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.033995 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.052851 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.081262 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.097148 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.114384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.114449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.114464 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.114483 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.114497 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.118088 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.136792 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.142584 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.151550 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.167576 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.181039 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.194140 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.204773 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.216467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.216707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.216854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.216957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.217070 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.219935 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.238733 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.250899 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc 
kubenswrapper[4740]: I0216 12:53:46.253505 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:24:59.501924729 +0000 UTC Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.266362 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.281252 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.300795 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.317754 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.319050 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.319087 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.319097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.319112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.319121 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.333326 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z 
is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.347450 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.380138 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.399093 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.413789 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.421092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.421146 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.421163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.421180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.421194 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.428478 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.523014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.523095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.523120 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.523151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.523171 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.626318 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.626380 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.626405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.626437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.626459 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.729646 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.729712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.729734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.729759 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.729776 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.832894 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.832944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.832955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.832973 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.832987 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.935454 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.935521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.935543 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.935572 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.935594 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.039109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.039184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.039214 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.039247 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.039270 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.142653 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.142720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.142739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.142776 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.142860 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.245859 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.245946 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.245961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.245987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.246003 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.254345 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:37:13.079287836 +0000 UTC Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.280862 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.280935 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:47 crc kubenswrapper[4740]: E0216 12:53:47.281014 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.280935 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:47 crc kubenswrapper[4740]: E0216 12:53:47.281125 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:47 crc kubenswrapper[4740]: E0216 12:53:47.281233 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.281382 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:47 crc kubenswrapper[4740]: E0216 12:53:47.281554 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.349906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.350237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.350424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.350618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.350805 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.453618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.453658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.453688 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.453708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.453718 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.556432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.556479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.556492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.556510 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.556522 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.659017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.659094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.659110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.659132 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.659148 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.761643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.761896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.762009 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.762095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.762175 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.865467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.865521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.865537 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.865561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.865578 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.968707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.968762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.968774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.968793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.968805 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.071932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.071997 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.072014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.072043 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.072061 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.174796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.174858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.174869 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.174886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.174898 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.254685 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:23:17.212854883 +0000 UTC Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.277851 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.277937 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.277956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.277985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.278004 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.380668 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.380990 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.381076 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.381162 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.381250 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.484957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.485031 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.485045 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.485062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.485073 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.588352 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.588410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.588426 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.588445 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.588457 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.690266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.690308 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.690319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.690337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.690349 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.792662 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.792721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.792730 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.792747 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.792756 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.895779 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.895835 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.895843 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.895858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.895869 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.998561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.998613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.998624 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.998642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.998654 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.101934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.101997 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.102023 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.102052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.102074 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.204622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.204661 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.204671 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.204685 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.204696 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.255166 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:52:47.267252694 +0000 UTC Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.280647 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.280691 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.280719 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.280666 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:49 crc kubenswrapper[4740]: E0216 12:53:49.280839 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:49 crc kubenswrapper[4740]: E0216 12:53:49.280959 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:49 crc kubenswrapper[4740]: E0216 12:53:49.280991 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:49 crc kubenswrapper[4740]: E0216 12:53:49.281051 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.307725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.307768 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.307783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.307801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.307830 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.410785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.410860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.410875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.410898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.410912 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.513357 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.513406 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.513418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.513434 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.513447 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.616047 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.616106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.616119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.616137 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.616151 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.719440 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.719487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.719498 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.719519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.719532 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.822280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.822330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.822345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.822368 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.822455 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.925433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.925478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.925487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.925503 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.925514 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.028210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.028252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.028263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.028280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.028291 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.131298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.131365 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.131388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.131412 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.131427 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.234150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.234195 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.234208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.234224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.234236 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.255704 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:07:21.128819355 +0000 UTC Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.337666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.337770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.337795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.337878 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.337904 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.441828 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.441876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.441887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.441908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.441919 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.549727 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.549901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.549979 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.550016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.550080 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.653494 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.653540 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.653552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.653568 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.653580 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.682206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.682303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.682315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.682334 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.682348 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.699310 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:50Z is after 2025-08-24T17:21:41Z"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.704322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.704388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.704401 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.704420 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.704433 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.719230 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:50Z is after 2025-08-24T17:21:41Z"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.723689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.723731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.723746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.723767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.723782 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.746089 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:50Z is after 2025-08-24T17:21:41Z"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.751847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.751896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.751905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.751920 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.751930 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.763669 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.768272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.768323 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.768335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.768352 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.768366 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.781701 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.781889 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.783652 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.783692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.783704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.783721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.783734 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.885838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.885871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.885879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.885892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.885919 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.988199 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.988242 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.988251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.988266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.988278 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.090320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.090428 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.090449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.090471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.090485 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.193378 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.193511 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.193534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.193567 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.193610 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.256263 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:32:46.293495405 +0000 UTC Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.280152 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.280166 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.280327 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:51 crc kubenswrapper[4740]: E0216 12:53:51.280532 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.280597 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:51 crc kubenswrapper[4740]: E0216 12:53:51.280758 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:51 crc kubenswrapper[4740]: E0216 12:53:51.280972 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:51 crc kubenswrapper[4740]: E0216 12:53:51.281170 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.295301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.295342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.295351 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.295362 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.295373 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.398055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.398132 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.398158 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.398190 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.398213 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.501351 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.501395 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.501410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.501431 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.501442 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.603788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.603893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.603916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.603952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.603978 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.707363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.707428 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.707447 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.707471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.707489 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.811101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.811150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.811163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.811188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.811200 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.914968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.915353 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.915518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.915664 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.915803 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.018956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.019032 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.019059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.019093 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.019118 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.123379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.123466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.123492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.123522 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.123557 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.226771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.226852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.226864 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.226882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.226893 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.256975 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:21:01.54957382 +0000 UTC Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.329686 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.329761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.329907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.329939 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.329956 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.433092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.433514 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.433704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.433932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.434111 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.537306 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.537767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.538095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.538237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.538349 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.640863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.640922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.640935 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.640954 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.640967 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.744968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.745045 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.745058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.745085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.745098 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.848902 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.848959 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.848976 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.848997 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.849013 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.952355 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.952649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.952743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.952860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.952958 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.056081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.056149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.056163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.056188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.056203 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.159098 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.159148 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.159159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.159177 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.159190 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.257547 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:05:40.80437878 +0000 UTC Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.261769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.262283 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.262686 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.262789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.262895 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.282417 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:53 crc kubenswrapper[4740]: E0216 12:53:53.282539 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.282594 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:53 crc kubenswrapper[4740]: E0216 12:53:53.282644 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.284765 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:53 crc kubenswrapper[4740]: E0216 12:53:53.284999 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.285205 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:53 crc kubenswrapper[4740]: E0216 12:53:53.285444 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.301268 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.317513 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.334110 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.349027 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.365591 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.365651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.365668 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.365693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.365709 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.367476 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z 
is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.384956 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.396723 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.409558 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc 
kubenswrapper[4740]: I0216 12:53:53.423352 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094e
ead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.435389 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.449994 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.462978 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.471971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.472020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.472030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.472053 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.472067 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.477460 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.493021 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.520703 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.541719 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.557200 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.575039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.575308 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.575396 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.575497 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.575575 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.679059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.679110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.679122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.679140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.679151 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.782633 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.782683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.782695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.782715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.782731 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.885916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.885956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.885966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.885981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.885990 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.988846 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.988915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.988926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.988949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.988961 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.091587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.091649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.091658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.091684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.091694 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.194979 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.195027 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.195038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.195058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.195068 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.259520 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:15:25.23839821 +0000 UTC Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.297350 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.297413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.297423 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.297444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.297464 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.400468 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.400513 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.400524 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.400539 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.400550 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.506435 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.506493 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.506521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.506546 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.506563 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.609863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.609942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.609962 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.609988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.610013 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.713755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.713876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.713903 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.713933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.713952 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.817000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.817058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.817071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.817092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.817107 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.920611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.920665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.920676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.920696 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.920708 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.023993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.024052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.024067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.024086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.024098 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.127460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.127516 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.127528 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.127550 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.127588 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.231315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.231376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.231390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.231408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.231425 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.260125 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:36:52.516908699 +0000 UTC Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.280770 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.280781 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.280792 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.280836 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:55 crc kubenswrapper[4740]: E0216 12:53:55.281307 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:55 crc kubenswrapper[4740]: E0216 12:53:55.281471 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.281572 4740 scope.go:117] "RemoveContainer" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e" Feb 16 12:53:55 crc kubenswrapper[4740]: E0216 12:53:55.281581 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:55 crc kubenswrapper[4740]: E0216 12:53:55.281717 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:55 crc kubenswrapper[4740]: E0216 12:53:55.281742 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.334473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.334533 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.334552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.334578 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.334597 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.438116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.438173 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.438184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.438205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.438221 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.540926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.541241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.541320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.541394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.541457 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.645274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.645325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.645336 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.645362 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.645375 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.749354 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.749467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.749570 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.749602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.749617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.853340 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.853430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.853484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.853510 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.853531 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.956144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.956192 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.956202 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.956225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.956237 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.059501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.059542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.059551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.059565 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.059574 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.162066 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.162099 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.162109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.162123 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.162135 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.260507 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:11:13.241819746 +0000 UTC Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.264680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.264739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.264758 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.264782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.264798 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.367292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.367325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.367333 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.367346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.367355 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.469077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.469129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.469141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.469157 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.469166 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.572258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.572661 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.572942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.573186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.573398 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.676252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.676301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.676318 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.676345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.676366 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.779684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.780091 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.780227 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.780338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.780472 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.883882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.883944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.883961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.884316 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.884353 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.987533 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.987582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.987598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.987620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.987636 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.090963 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.091313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.091410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.091541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.091646 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.194071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.194142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.194156 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.194170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.194181 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.261501 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 14:05:38.66361818 +0000 UTC Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.280971 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.281026 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.281049 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.280971 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:57 crc kubenswrapper[4740]: E0216 12:53:57.281100 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:57 crc kubenswrapper[4740]: E0216 12:53:57.281223 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:57 crc kubenswrapper[4740]: E0216 12:53:57.281319 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:57 crc kubenswrapper[4740]: E0216 12:53:57.281403 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.297012 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.297045 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.297077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.297094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.297109 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.399917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.399985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.400007 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.400026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.400038 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.506697 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.506770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.506783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.506800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.506825 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.608551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.608625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.608649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.608677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.608697 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.712328 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.712638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.712719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.712831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.712927 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.817245 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.817330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.817363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.817437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.817496 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.920252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.920324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.920347 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.920375 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.920397 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.023339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.023413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.023435 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.023466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.023489 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.125890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.125938 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.125948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.125967 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.125980 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.228312 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.228351 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.228361 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.228377 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.228388 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.264474 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:43:14.598330917 +0000 UTC Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.330370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.330419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.330431 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.330449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.330460 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.433068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.433112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.433123 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.433141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.433152 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.535893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.535934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.535943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.535959 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.535974 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.638663 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.638713 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.638721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.638736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.638746 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.740799 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.740862 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.740872 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.740887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.740898 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.842429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.842456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.842466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.842479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.842486 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.944666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.944704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.944715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.944730 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.944741 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.047785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.047851 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.047860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.047874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.047884 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.150551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.150587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.150596 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.150609 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.150617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.253169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.253210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.253222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.253240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.253251 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.265111 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:31:49.411002243 +0000 UTC Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.280491 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.280514 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.280522 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.280677 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.280971 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.281076 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.281139 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.281227 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.355396 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.355439 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.355457 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.355476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.355485 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.458416 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.458454 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.458465 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.458479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.458489 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.560726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.560752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.560775 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.560789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.560798 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.662130 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.662507 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.662626 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:54:31.662601661 +0000 UTC m=+99.038950412 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.663290 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.663347 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.663360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.663377 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.663390 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.765376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.765410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.765437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.765452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.765461 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.868883 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.868953 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.868970 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.868995 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.869014 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.971210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.971258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.971271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.971288 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.971300 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.073784 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.073856 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.073871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.073891 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.073904 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.176798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.176861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.176875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.176897 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.176911 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.266142 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:23:55.642912272 +0000 UTC Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.279880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.279945 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.279957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.279981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.279995 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.383153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.383199 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.383208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.383226 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.383235 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.489769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.490309 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.490373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.490493 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.490552 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.593731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.593788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.593800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.593840 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.593854 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.697186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.697244 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.697254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.697275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.697289 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.788267 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.788322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.788335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.788355 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.788366 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.801874 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.805515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.805688 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.805761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.805895 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.805996 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.820232 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.823956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.824008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.824020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.824038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.824066 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.835261 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.838552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.838602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.838611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.838624 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.838650 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.851111 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.854330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.854359 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.854372 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.854389 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.854399 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.865205 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.865374 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.866904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.866936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.866947 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.866965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.866975 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.973764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.973856 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.973879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.973906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.973924 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.076276 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.076310 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.076318 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.076331 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.076340 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.178449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.178489 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.178499 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.178512 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.178521 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.267872 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:09:37.493701857 +0000 UTC Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.280443 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:01 crc kubenswrapper[4740]: E0216 12:54:01.280566 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.280612 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.280625 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:01 crc kubenswrapper[4740]: E0216 12:54:01.280751 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.280771 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.281041 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.281079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.281093 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.281111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.281124 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: E0216 12:54:01.281434 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:01 crc kubenswrapper[4740]: E0216 12:54:01.281525 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.383430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.383497 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.383507 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.383522 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.383532 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.485971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.486001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.486012 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.486026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.486035 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.590059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.590107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.590120 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.590137 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.590148 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.692627 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.692682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.692698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.692723 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.692740 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.727244 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/0.log" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.727341 4740 generic.go:334] "Generic (PLEG): container finished" podID="21f981d4-46dd-4bb5-b244-aaf603008c5e" containerID="a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd" exitCode=1 Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.727393 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerDied","Data":"a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.728181 4740 scope.go:117] "RemoveContainer" containerID="a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.749980 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.767160 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.781640 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.793714 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.794894 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.794936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.794947 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.794964 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.794977 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.810518 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.822533 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.841218 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.853900 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.866870 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.877785 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.888292 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc 
kubenswrapper[4740]: I0216 12:54:01.897242 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.897285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.897296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.897318 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.897330 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.899774 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.910857 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.920223 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.932960 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.945751 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.961318 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.001349 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.001414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.001430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.001451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.001466 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.104352 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.104398 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.104410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.104427 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.104438 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.207301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.207352 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.207366 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.207384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.207395 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.268216 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:21:42.279492843 +0000 UTC Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.309725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.309772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.309782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.309800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.309827 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.412895 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.412943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.412952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.412968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.412979 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.515067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.515120 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.515130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.515144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.515153 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.618043 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.618087 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.618100 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.618116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.618128 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.721059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.721107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.721119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.721138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.721148 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.737200 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/0.log" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.737268 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerStarted","Data":"f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.752343 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.761338 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.770654 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.781758 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4
be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.796783 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.807730 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.817515 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc 
kubenswrapper[4740]: I0216 12:54:02.823046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.823074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.823083 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.823112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.823122 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.830521 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.841260 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.856346 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.874258 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.885639 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.895396 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.912684 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.926288 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.926328 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.926339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.926355 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.926367 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.927423 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.947913 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.958826 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.028087 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.028121 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.028130 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.028142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.028151 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.130318 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.130361 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.130373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.130390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.130400 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.233241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.233319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.233338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.233364 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.233381 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.269240 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:41:14.302600968 +0000 UTC Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.280197 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.280235 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:03 crc kubenswrapper[4740]: E0216 12:54:03.280322 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:03 crc kubenswrapper[4740]: E0216 12:54:03.280464 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.280547 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:03 crc kubenswrapper[4740]: E0216 12:54:03.280609 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.280868 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:03 crc kubenswrapper[4740]: E0216 12:54:03.280932 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.295650 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.305055 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.325649 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b
178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.335676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.335831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.335848 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.335863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.335874 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.338374 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.353552 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.372953 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.385527 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.397526 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.407900 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.418255 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.426914 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.438210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.438236 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.438245 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc 
kubenswrapper[4740]: I0216 12:54:03.438258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.438268 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.443573 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.453113 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.467283 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.481339 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.495305 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.506982 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.540464 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.540515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.540526 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.540540 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.540550 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.643482 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.643894 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.643918 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.643951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.643973 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.746437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.746498 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.746520 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.746547 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.746568 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.849210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.849266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.849278 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.849295 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.849304 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.951182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.951228 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.951240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.951256 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.951268 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.053356 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.053385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.053393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.053407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.053416 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.155292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.155343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.155354 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.155373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.155383 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.259665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.259707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.259716 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.259731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.259741 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.270353 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 12:08:09.242498106 +0000 UTC Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.361602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.361659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.361672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.361694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.361705 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.463932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.463977 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.463986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.464001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.464011 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.566798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.566852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.566863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.566877 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.566885 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.669545 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.669584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.669592 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.669607 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.669618 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.771521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.771561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.771569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.771584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.771592 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.874907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.874966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.874981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.875004 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.875018 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.977399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.977445 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.977455 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.977471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.977482 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.080066 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.080109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.080121 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.080140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.080152 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.183053 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.183095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.183106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.183124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.183137 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.271356 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:25:31.98894869 +0000 UTC Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.280967 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.280998 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.281025 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.281048 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:05 crc kubenswrapper[4740]: E0216 12:54:05.281082 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:05 crc kubenswrapper[4740]: E0216 12:54:05.281158 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:05 crc kubenswrapper[4740]: E0216 12:54:05.281242 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:05 crc kubenswrapper[4740]: E0216 12:54:05.281324 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.285599 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.285626 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.285635 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.285649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.285658 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.387877 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.387928 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.387940 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.387956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.387967 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.489736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.489763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.489771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.489783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.489792 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.592429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.592480 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.592490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.592511 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.592522 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.694886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.694943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.694961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.694981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.694993 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.798000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.798048 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.798061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.798083 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.798094 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.900324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.900363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.900375 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.900389 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.900398 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.003210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.003260 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.003277 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.003298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.003314 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.105313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.105349 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.105357 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.105371 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.105381 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.207586 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.207630 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.207642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.207663 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.207677 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.272169 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:43:58.784028153 +0000 UTC Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.309661 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.309698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.309708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.309722 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.309731 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.412113 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.412170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.412182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.412200 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.412211 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.514536 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.514584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.514594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.514612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.514624 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.617398 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.617456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.617466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.617485 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.617494 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.719974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.720019 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.720055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.720075 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.720087 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.823217 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.823264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.823276 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.823294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.823306 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.926559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.926598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.926607 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.926624 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.926633 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.028681 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.028736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.028752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.028774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.028798 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.131675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.131744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.131756 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.131774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.131823 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.234286 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.234345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.234361 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.234384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.234399 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.272873 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:43:36.138443695 +0000 UTC Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.280204 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:07 crc kubenswrapper[4740]: E0216 12:54:07.280373 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.280628 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:07 crc kubenswrapper[4740]: E0216 12:54:07.280707 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.280879 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:07 crc kubenswrapper[4740]: E0216 12:54:07.280941 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.281181 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:07 crc kubenswrapper[4740]: E0216 12:54:07.281251 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.338446 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.338514 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.338527 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.338548 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.338562 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.441101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.441140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.441151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.441165 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.441175 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.543023 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.543070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.543082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.543097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.543107 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.646305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.646347 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.646356 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.646369 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.646378 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.749839 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.749897 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.749913 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.749932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.749945 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.853906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.853949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.853960 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.853979 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.853995 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.956602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.956674 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.956699 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.956727 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.956750 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.059556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.059600 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.059609 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.059622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.059631 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.162709 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.162753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.162762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.162785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.162797 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.264911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.264971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.264982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.265008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.265022 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.273963 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:40:44.586972497 +0000 UTC Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.281667 4740 scope.go:117] "RemoveContainer" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.370849 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.370930 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.370942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.370972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.370991 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.473719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.473781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.473795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.473840 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.473855 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.576915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.577014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.577033 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.577057 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.577074 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.679248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.679287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.679299 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.679319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.679329 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.760926 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/2.log" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.764676 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.765270 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.781343 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.782231 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc 
kubenswrapper[4740]: I0216 12:54:08.782264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.782272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.782314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.782324 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.800907 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.820905 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.837164 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.859980 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it 
has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.874772 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.884418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.884446 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.884454 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.884467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.884476 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.887943 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.899465 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.913880 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.941318 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.955367 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.965980 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.977585 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.986963 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.987005 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.987019 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.987036 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.987047 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.990682 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.001768 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.012365 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc 
kubenswrapper[4740]: I0216 12:54:09.029502 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094e
ead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.089733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.089767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.089778 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.089794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.089827 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.192382 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.192421 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.192429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.192443 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.192452 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.274486 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:37:19.396511715 +0000 UTC Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.280835 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.280888 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.280888 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.280971 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:09 crc kubenswrapper[4740]: E0216 12:54:09.280963 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:09 crc kubenswrapper[4740]: E0216 12:54:09.281094 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:09 crc kubenswrapper[4740]: E0216 12:54:09.281166 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:09 crc kubenswrapper[4740]: E0216 12:54:09.281585 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.294517 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.294551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.294559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.294572 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.294581 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.397073 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.397115 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.397124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.397138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.397149 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.499300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.499360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.499372 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.499387 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.499397 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.602272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.602345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.602370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.602403 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.602426 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.705501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.705569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.705587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.705611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.705627 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.773235 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/3.log" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.774541 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/2.log" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.780860 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" exitCode=1 Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.780968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.781067 4740 scope.go:117] "RemoveContainer" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.784265 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" Feb 16 12:54:09 crc kubenswrapper[4740]: E0216 12:54:09.785528 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.809563 4740 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.809726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.809808 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.809917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.809945 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.811902 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db
823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.833979 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.856632 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.877047 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.912672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.912735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.912748 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc 
kubenswrapper[4740]: I0216 12:54:09.912766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.912778 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.913160 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:09Z\\\",\\\"message\\\":\\\"_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221368 6809 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221378 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221379 6809 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221390 6809 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0216 12:54:09.221390 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":
\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overri
des\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 
12:54:09.935201 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.950782 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.961739 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.972577 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.981919 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.992234 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.003149 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.015391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.015425 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.015433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.015448 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.015477 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.018573 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.041827 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6db
e1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.053502 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.064328 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.081279 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.118394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.118469 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.118483 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.118529 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.118548 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.221024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.221065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.221079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.221100 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.221116 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.275390 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:05:38.488798867 +0000 UTC Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.323746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.323793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.323805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.323842 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.323854 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.426797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.426886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.426899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.426921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.426938 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.529341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.529401 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.529417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.529433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.529443 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.633257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.633325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.633337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.633357 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.633370 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.736450 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.736505 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.736517 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.736534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.736546 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.790965 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/3.log" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.794890 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" Feb 16 12:54:10 crc kubenswrapper[4740]: E0216 12:54:10.795189 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.807597 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.821098 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.835901 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.841711 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.841757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.841768 4740 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.841786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.841800 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.849256 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.863417 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.876584 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.890687 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.904053 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6db
e1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.923527 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.932415 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.943710 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e4192
37a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.944879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.944907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.944917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.944933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc 
kubenswrapper[4740]: I0216 12:54:10.944944 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.955902 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.971490 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.983082 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.004899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.004954 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.004966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc 
kubenswrapper[4740]: I0216 12:54:11.004984 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.004997 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.009575 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:09Z\\\",\\\"message\\\":\\\"_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221368 6809 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221378 6809 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221379 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221390 6809 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0216 12:54:09.221390 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:54:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.018309 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022286 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022293 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022306 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022314 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022303 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.033365 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.034262 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.037521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.037556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.037565 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.037578 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.037589 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.050629 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.055552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.055597 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.055607 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.055625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.055639 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.069965 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.074417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.074454 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.074463 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.074478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.074488 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.087117 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.087273 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.088424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.088462 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.088478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.088502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.088518 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.191130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.191163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.191172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.191186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.191196 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.276549 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:11:45.736454643 +0000 UTC Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.280970 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.281032 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.281234 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.281291 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.281450 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.281622 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.281727 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.281926 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.295050 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.295092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.295105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.295518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.295778 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.298195 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.398402 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.398440 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.398453 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.398470 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.398483 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.501490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.501568 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.501587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.501613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.501632 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.604701 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.604738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.604750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.604766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.604774 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.707765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.707860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.707903 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.707922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.707934 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.811319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.811446 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.811460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.811478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.811488 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.915693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.915763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.915781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.915831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.915853 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.019212 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.019305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.019320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.019341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.019359 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.123283 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.123333 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.123345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.123366 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.123379 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.226566 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.226624 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.226637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.226659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.226673 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.277367 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:40:26.665128929 +0000 UTC Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.329785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.330151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.330254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.330396 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.330518 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.433198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.433271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.433285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.433303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.433315 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.537047 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.537154 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.537206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.537238 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.537255 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.640293 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.640332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.640342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.640363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.640375 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.743131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.743170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.743179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.743193 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.743202 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.845506 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.845615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.845642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.845676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.845701 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.949593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.949680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.949714 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.949751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.949774 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.052949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.053024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.053046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.053072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.053090 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.155731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.155773 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.155784 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.155800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.155814 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.258926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.259007 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.259020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.259039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.259055 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.277770 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:12:03.057101645 +0000 UTC Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.280203 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.280247 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.280257 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:13 crc kubenswrapper[4740]: E0216 12:54:13.280337 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.280527 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:13 crc kubenswrapper[4740]: E0216 12:54:13.280549 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:13 crc kubenswrapper[4740]: E0216 12:54:13.280641 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:13 crc kubenswrapper[4740]: E0216 12:54:13.280720 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.293608 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.305883 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.319836 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504
aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.331655 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.343671 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc 
kubenswrapper[4740]: I0216 12:54:13.360433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.360469 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.360480 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.360496 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.360508 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.362135 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.377727 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.393760 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.414126 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.428415 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.439884 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.457639 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:09Z\\\",\\\"message\\\":\\\"_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221368 6809 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221378 6809 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221379 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221390 6809 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0216 12:54:09.221390 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:54:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.462589 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.462633 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.462642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.462658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.462667 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.469080 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.478662 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"071403d5-fba4-44ab-a7f4-639b19b7dfe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3c4fde60d7024db19bf7463e891e23ab4ad03222025aa3c38d27649128c421e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.493062 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.506018 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b
7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.517681 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.528648 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.565245 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.565287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.565297 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.565311 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.565322 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.668190 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.668460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.668528 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.668610 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.668680 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.771104 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.771144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.771159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.771177 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.771188 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.873765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.873914 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.873929 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.873950 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.873965 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.976385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.976531 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.976544 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.976560 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.976571 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.079391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.080067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.080084 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.080153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.080173 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.183948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.184011 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.184029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.184055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.184074 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.278648 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:27:31.237334716 +0000 UTC Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.287680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.287745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.287761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.287784 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.287797 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.390200 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.390248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.390258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.390272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.390284 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.492903 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.492950 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.492960 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.492976 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.492985 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.594675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.594706 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.594719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.594734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.594745 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.699500 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.699562 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.699575 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.699596 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.699608 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.803124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.803172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.803182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.803201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.803213 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.906751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.906812 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.906850 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.906874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.906887 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.009729 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.009777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.009790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.009807 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.009837 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.113198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.113251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.113267 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.113285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.113295 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.219977 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.220052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.220081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.220108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.220126 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.279712 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:21:25.404567996 +0000 UTC Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.281228 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.281296 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:15 crc kubenswrapper[4740]: E0216 12:54:15.281443 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.281592 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:15 crc kubenswrapper[4740]: E0216 12:54:15.281745 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.281780 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:15 crc kubenswrapper[4740]: E0216 12:54:15.281960 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:15 crc kubenswrapper[4740]: E0216 12:54:15.282059 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.323299 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.323348 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.323360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.323379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.323392 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.426602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.426644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.426660 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.426682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.426699 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.530017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.530092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.530110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.530133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.530151 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.633381 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.633443 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.633452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.633471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.633487 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.736322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.736584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.736609 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.736644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.736667 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.839417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.839460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.839469 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.839484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.839492 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.941518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.941577 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.941591 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.941613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.941625 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.044736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.044797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.044812 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.044853 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.044865 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.147681 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.147722 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.147734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.147751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.147762 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.250875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.250944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.250966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.251015 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.251035 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.280900 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 13:17:09.714914109 +0000 UTC Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.353640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.353674 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.353682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.353695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.353705 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.455916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.455981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.455998 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.456026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.456045 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.558473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.558737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.558810 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.558910 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.558980 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.661257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.661335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.661361 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.661403 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.661437 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.763280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.763325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.763337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.763351 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.763361 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.865592 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.866000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.866135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.866249 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.866397 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.968993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.969073 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.969126 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.969152 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.969168 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.072826 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.072881 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.072901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.072924 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.072939 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.175600 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.175917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.175990 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.176058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.176118 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.260177 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.260661 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.260600 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261109 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261210 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.260813 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 
12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261316 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261327 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261371 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.261356414 +0000 UTC m=+148.637705135 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261498 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.261484618 +0000 UTC m=+148.637833349 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.279273 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.279310 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.279323 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.279339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.279349 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.280291 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.280362 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.280377 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.280889 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.280701 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.280981 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:03:52.396661662 +0000 UTC Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.280400 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.281498 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.281608 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.295190 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.362416 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.362679 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:21.362638707 +0000 UTC m=+148.738987458 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.362851 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.362947 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.363034 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.363138 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:21.363098401 +0000 UTC m=+148.739447172 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.363157 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.363226 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.363206144 +0000 UTC m=+148.739554905 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.383228 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.383279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.383294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.383314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.383326 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.486390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.486441 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.486458 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.486482 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.486497 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.589858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.589921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.589938 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.589961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.589978 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.692771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.692857 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.692871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.692890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.692902 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.795644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.795720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.795746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.795777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.795802 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.899376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.899447 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.899474 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.899505 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.899527 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.003328 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.003407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.003432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.003465 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.003488 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.107506 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.107912 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.108115 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.108274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.108417 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.210732 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.210779 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.210788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.210802 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.210813 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.281792 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:52:27.768675771 +0000 UTC Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.313715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.313783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.313805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.313875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.313900 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.416013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.416051 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.416066 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.416084 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.416099 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.519279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.519339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.519362 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.519399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.519455 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.622500 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.622541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.622552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.622566 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.622577 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.726104 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.726157 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.726168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.726187 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.726199 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.829278 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.829363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.829376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.829390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.829401 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.931631 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.931678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.931690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.931704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.931716 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.034559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.034634 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.034643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.034658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.034670 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.137234 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.137279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.137295 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.137316 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.137335 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.240255 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.240291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.240304 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.240322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.240333 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.281199 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.281255 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.281199 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:19 crc kubenswrapper[4740]: E0216 12:54:19.281332 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.281348 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:19 crc kubenswrapper[4740]: E0216 12:54:19.281402 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:19 crc kubenswrapper[4740]: E0216 12:54:19.281489 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:19 crc kubenswrapper[4740]: E0216 12:54:19.281539 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.282204 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:44:29.284677875 +0000 UTC Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.343703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.343758 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.343770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.343790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.343802 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.447494 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.447567 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.447584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.447609 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.447628 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.550852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.550897 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.550907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.550922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.550932 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.654244 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.654279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.654292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.654310 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.654334 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.757121 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.757300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.757343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.757373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.757391 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.860560 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.860636 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.860659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.860690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.860714 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.964210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.964279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.964302 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.964332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.964355 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.066735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.066772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.066780 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.066794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.066803 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.169801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.169890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.169906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.169930 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.169948 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.273370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.273425 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.273439 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.273456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.273473 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.282735 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:20:46.830230805 +0000 UTC Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.375993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.376034 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.376065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.376116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.376132 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.479180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.479248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.479270 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.479301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.479325 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.582019 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.582060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.582074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.582091 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.582102 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.684990 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.685117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.685188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.685265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.685290 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.788217 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.788265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.788275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.788289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.788304 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.890757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.890796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.890859 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.890885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.890896 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.993804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.993890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.993904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.993921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.993934 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.096009 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.096050 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.096064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.096081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.096095 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.142039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.142120 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.142144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.142172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.142192 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.156787 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.162035 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.162088 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.162102 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.162130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.162145 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.176784 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.181224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.181308 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.181329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.181360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.181381 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.197350 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.201964 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.202016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.202029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.202051 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.202067 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.224106 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list identical to the first occurrence above, elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.228468 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.228525 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.228538 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.228555 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.228568 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.239649 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list identical to the first occurrence above, elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.239770 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.241231 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.241271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.241279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.241294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.241304 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.280589 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.280674 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.280720 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.280953 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.280990 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.281123 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.281228 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.281284 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.282916 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:29:29.409245712 +0000 UTC Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.343737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.343788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.343800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.343835 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.343855 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.446668 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.446708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.446724 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.446740 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.446753 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.549338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.549418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.549430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.549448 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.549461 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.651998 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.652098 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.652119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.652142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.652159 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.754552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.754608 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.754621 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.754640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.754652 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.857265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.857360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.857377 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.857434 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.857454 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.960892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.960942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.960951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.960972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.960986 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.064678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.064737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.064749 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.064771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.064786 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.167392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.167467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.167485 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.167550 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.167569 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.269909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.269963 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.269975 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.269994 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.270007 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.283561 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 05:00:12.395796084 +0000 UTC Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.372268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.372350 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.372375 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.372407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.372431 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.474917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.474965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.474975 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.474993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.475005 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.577550 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.577598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.577612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.577634 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.577651 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.680442 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.680546 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.680564 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.680593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.680617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.783238 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.783295 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.783307 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.783325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.783338 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.885603 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.885638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.885649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.885667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.885677 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.988885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.988937 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.988952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.988973 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.988988 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.091888 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.091970 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.091991 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.092018 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.092037 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.194804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.194893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.194908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.194934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.194950 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.281045 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.281113 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:23 crc kubenswrapper[4740]: E0216 12:54:23.281215 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.281384 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.281408 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:23 crc kubenswrapper[4740]: E0216 12:54:23.281452 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:23 crc kubenswrapper[4740]: E0216 12:54:23.281571 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:23 crc kubenswrapper[4740]: E0216 12:54:23.281672 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.284694 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:22:36.977366539 +0000 UTC Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.296394 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.297317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.297355 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.297365 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.297385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.297394 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.310984 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0
a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.323694 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.339555 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.356014 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.379502 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.396648 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6db
e1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.399597 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.399668 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.399690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.399719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.399741 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.407917 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.418451 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.431837 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.442467 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.453472 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"071403d5-fba4-44ab-a7f4-639b19b7dfe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3c4fde60d7024db19bf7463e891e23ab4ad03222025aa3c38d27649128c421e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.468315 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.481791 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.495326 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.502551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.502635 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.502654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc 
kubenswrapper[4740]: I0216 12:54:23.502677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.502695 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.520754 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:09Z\\\",\\\"message\\\":\\\"_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221368 6809 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221378 6809 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221379 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221390 6809 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0216 12:54:09.221390 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:54:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.542098 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"541c3fc8-2570-4c03-87b4-65f25ff06131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9c8099bb5eba996bc3d8d2e863bd70633bd9b0254c3fe5821fc4793cf046d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c0eeeb27377d61443f7754bfac1381f13b4f3a82ba264f61d1f9e1f226ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec7459bbcca61588e290cb35a3f34e0554be0a8ecdb013266b263c0c23ec9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84842791f89c497895c2a953a0e71d29b46aa338838efc39995fc2b0ab32ca89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad9616d37e41997011d3984ba488307ed05ea1256b99562f50f2536d76cec56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d375bd81abcdcb3e98158153980ce48ba7e8af92764222e3b5effef93ae716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22d375bd81abcdcb3e98158153980ce48ba7e8af92764222e3b5effef93ae716\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed8cdbaffa6d92f7bb70361af17a1a1ac36209166e6da8115c0ed1a5261f3712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed8cdbaffa6d92f7bb70361af17a1a1ac36209166e6da8115c0ed1a5261f3712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e6ca6d609b2a2ecae1283536c5a981a4817d9e704b4e9629190065843360a55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6ca6d609b2a2ecae1283536c5a981a4817d9e704b4e9629190065843360a55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.558780 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.574215 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.605568 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.605627 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.605638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.605655 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.605664 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.707926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.707999 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.708024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.708055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.708079 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.810896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.811016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.811030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.811050 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.811063 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.913705 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.913777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.913790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.913805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.913837 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.017114 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.017149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.017159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.017177 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.017194 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.120273 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.120320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.120335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.120357 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.120373 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.223329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.223397 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.223416 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.223440 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.223452 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.286010 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:01:18.786006306 +0000 UTC Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.325750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.325807 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.325851 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.325875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.325892 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.428710 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.428782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.428800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.428837 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.428847 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.531761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.532061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.532071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.532086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.532095 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.635208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.635312 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.635342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.635371 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.635392 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.738698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.738798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.738855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.738888 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.738909 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.840598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.840683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.840707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.840738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.840760 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.944354 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.944436 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.944458 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.944487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.944506 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.047689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.047791 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.047865 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.047892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.047911 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.151052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.151117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.151131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.151154 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.151168 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.253106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.253153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.253164 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.253184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.253198 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.280726 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.280771 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:25 crc kubenswrapper[4740]: E0216 12:54:25.280922 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.280987 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:25 crc kubenswrapper[4740]: E0216 12:54:25.281060 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:25 crc kubenswrapper[4740]: E0216 12:54:25.282048 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.282352 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" Feb 16 12:54:25 crc kubenswrapper[4740]: E0216 12:54:25.282533 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.288002 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:34:07.444797284 +0000 UTC Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.288130 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:25 crc kubenswrapper[4740]: E0216 12:54:25.288311 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.356370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.356484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.356501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.356530 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.356550 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.460755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.460843 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.460864 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.460889 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.460908 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.564775 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.564867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.564885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.564910 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.564927 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.668586 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.668656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.668680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.668708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.668729 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.771367 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.771488 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.771513 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.771729 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.771761 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.874518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.874596 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.874615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.874639 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.874657 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.978133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.978188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.978251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.978274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.978290 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.081042 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.081111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.081133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.081205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.081291 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.184039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.184124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.184147 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.184181 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.184264 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.287931 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.287986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.288001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.288024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.288039 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.288188 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:24:00.989278516 +0000 UTC Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.391628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.391698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.391715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.391739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.391759 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.495074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.495155 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.495185 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.495211 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.495229 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.598677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.598760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.598779 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.598829 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.598844 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.701513 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.701588 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.701628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.701654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.701670 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.804651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.804703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.804717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.804745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.804757 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.908613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.908678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.908690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.908707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.908719 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.011223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.011268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.011280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.011297 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.011308 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.113626 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.113679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.113689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.113704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.113713 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.216476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.216537 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.216547 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.216561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.216597 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.281152 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.281204 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.281225 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.281169 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:27 crc kubenswrapper[4740]: E0216 12:54:27.281339 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:27 crc kubenswrapper[4740]: E0216 12:54:27.281475 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:27 crc kubenswrapper[4740]: E0216 12:54:27.281654 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:27 crc kubenswrapper[4740]: E0216 12:54:27.281783 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.288733 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:00:31.744707598 +0000 UTC Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.319386 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.319437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.319448 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.319461 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.319471 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.421982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.422019 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.422028 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.422069 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.422081 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.523946 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.523981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.524012 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.524026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.524036 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.626216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.626467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.626541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.626667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.626758 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.729242 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.729309 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.729321 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.729338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.729349 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.832201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.832240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.832252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.832288 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.832301 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.935689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.935761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.935785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.935847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.935871 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.038516 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.038554 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.038564 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.038578 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.038587 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.140921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.141220 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.141523 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.141742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.141985 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.245237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.245673 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.245761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.245882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.245962 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.289216 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:33:10.068429409 +0000 UTC Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.349787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.350003 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.350030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.350064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.350103 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.452308 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.452370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.452388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.452413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.452432 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.555447 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.555490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.555502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.555521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.555537 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.658253 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.658753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.658949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.659122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.659389 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.763016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.763085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.763101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.763125 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.763139 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.865883 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.865922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.865932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.865946 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.865954 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.968439 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.968490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.968501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.968518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.968529 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.071658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.071733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.071746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.071762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.071771 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.174780 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.174898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.174924 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.174955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.174980 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.277127 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.277197 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.277225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.277255 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.277278 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.280658 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.280828 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.281076 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:29 crc kubenswrapper[4740]: E0216 12:54:29.281049 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.281189 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:29 crc kubenswrapper[4740]: E0216 12:54:29.281354 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:29 crc kubenswrapper[4740]: E0216 12:54:29.281537 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:29 crc kubenswrapper[4740]: E0216 12:54:29.281716 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.290552 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:48:47.861583045 +0000 UTC Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.381001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.381064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.381089 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.381119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.381141 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.484218 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.484262 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.484274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.484291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.484303 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.586520 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.586928 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.587076 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.587242 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.587406 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.690130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.690199 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.690216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.690241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.690259 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.793212 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.793282 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.793300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.793325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.793345 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.896092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.896163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.896185 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.896216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.896239 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.998957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.998988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.998998 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.999013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.999023 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.101844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.101893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.101914 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.101944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.101964 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.204467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.204536 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.204552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.204579 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.204595 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.290935 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:26:12.282909712 +0000 UTC Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.307542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.307601 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.307618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.307643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.307660 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.410254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.410290 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.410298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.410312 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.410321 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.513294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.513338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.513346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.513362 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.513372 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.615706 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.615781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.615804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.615876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.615900 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.719109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.719188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.719204 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.719225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.719245 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.821720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.821768 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.821780 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.821798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.821835 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.925072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.925148 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.925169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.925194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.925211 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.028484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.028519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.028528 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.028545 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.028556 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.131988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.132057 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.132082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.132113 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.132139 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.234943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.235024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.235046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.235067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.235081 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.280313 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.280397 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.280445 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.280316 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.280338 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.280575 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.280694 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.280758 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.291306 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:07:32.456685316 +0000 UTC Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.337711 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.337772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.337796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.337867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.337888 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.440518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.440885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.441027 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.441180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.441309 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.544071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.544117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.544126 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.544144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.544156 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.604732 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.604787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.604805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.604873 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.604889 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.621243 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:31Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.626736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.626801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.626845 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.626873 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.626889 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.641964 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:31Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.646030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.646068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.646083 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.646103 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.646118 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.666211 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:31Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.670941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.671006 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.671031 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.671060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.671078 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.689162 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:31Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.693620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.693679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.693700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.693724 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.693742 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.714453 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:31Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.714615 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.716767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.716808 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.716833 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.716852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.716864 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.737758 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.737927 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.737995 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:55:35.737978388 +0000 UTC m=+163.114327119 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.819501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.819593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.819618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.819650 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.819672 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.922315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.922378 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.922394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.922414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.922425 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.024414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.024492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.024515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.024545 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.024570 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.127430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.127537 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.127571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.127615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.127640 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.230580 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.230649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.230672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.230697 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.230713 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.291611 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:11:44.764488761 +0000 UTC Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.333418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.333474 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.333487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.333509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.333522 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.436063 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.436102 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.436113 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.436129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.436142 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.538935 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.538994 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.539005 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.539025 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.539040 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.641583 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.641629 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.641640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.641656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.641669 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.745388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.745453 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.745471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.745495 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.745523 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.849114 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.849195 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.849206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.849222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.849234 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.952419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.952485 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.952508 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.952539 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.952561 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.055073 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.055142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.055168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.055201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.055223 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.158858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.158920 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.158936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.158959 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.158977 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.261739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.261784 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.261794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.261822 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.261833 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.280135 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.280197 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:33 crc kubenswrapper[4740]: E0216 12:54:33.280276 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.280303 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:33 crc kubenswrapper[4740]: E0216 12:54:33.280563 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:33 crc kubenswrapper[4740]: E0216 12:54:33.280917 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.280940 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:33 crc kubenswrapper[4740]: E0216 12:54:33.281130 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.292564 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:16:19.610496554 +0000 UTC Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.306758 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12
:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616
e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnib
in\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.324195 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.340574 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:33 crc 
kubenswrapper[4740]: I0216 12:54:33.364587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.364646 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.364670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.364739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.364764 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.403628 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.403599733 podStartE2EDuration="1m19.403599733s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.38516144 +0000 UTC m=+100.761510231" watchObservedRunningTime="2026-02-16 12:54:33.403599733 +0000 UTC m=+100.779948494" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.455617 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-v88dn" podStartSLOduration=80.455589934 podStartE2EDuration="1m20.455589934s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.454174283 +0000 UTC m=+100.830523044" watchObservedRunningTime="2026-02-16 12:54:33.455589934 +0000 UTC m=+100.831938675" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.467194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.467257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.467281 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.467313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.467337 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.500744 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podStartSLOduration=80.500721564 podStartE2EDuration="1m20.500721564s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.468494035 +0000 UTC m=+100.844842776" watchObservedRunningTime="2026-02-16 12:54:33.500721564 +0000 UTC m=+100.877070305" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.531902 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.531877042 podStartE2EDuration="16.531877042s" podCreationTimestamp="2026-02-16 12:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.530483951 +0000 UTC m=+100.906832672" watchObservedRunningTime="2026-02-16 12:54:33.531877042 +0000 UTC m=+100.908225783" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.550379 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.550352556 podStartE2EDuration="48.550352556s" podCreationTimestamp="2026-02-16 12:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
12:54:33.549832161 +0000 UTC m=+100.926180902" watchObservedRunningTime="2026-02-16 12:54:33.550352556 +0000 UTC m=+100.926701317" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.571225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.571532 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.571639 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.571730 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.571845 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.585572 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.585522922 podStartE2EDuration="22.585522922s" podCreationTimestamp="2026-02-16 12:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.568741768 +0000 UTC m=+100.945090499" watchObservedRunningTime="2026-02-16 12:54:33.585522922 +0000 UTC m=+100.961871643" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.644793 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.644774198 podStartE2EDuration="1m15.644774198s" podCreationTimestamp="2026-02-16 12:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.643741997 +0000 UTC m=+101.020090738" watchObservedRunningTime="2026-02-16 12:54:33.644774198 +0000 UTC m=+101.021122919" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.645717 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" podStartSLOduration=79.645706805 podStartE2EDuration="1m19.645706805s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.629015054 +0000 UTC m=+101.005363775" watchObservedRunningTime="2026-02-16 12:54:33.645706805 +0000 UTC m=+101.022055526" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.674761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 
12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.674801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.674828 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.674847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.674858 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.777219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.777263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.777274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.777291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.777302 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.880066 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.880356 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.880429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.880514 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.880589 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.984720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.985434 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.985526 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.985911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.986012 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.088862 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.088918 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.088932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.088948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.088959 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.191988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.192061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.192079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.192109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.192126 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.293030 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:16:38.622071528 +0000 UTC Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.294752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.294904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.294965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.295028 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.295102 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.397452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.397502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.397518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.397538 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.397553 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.499605 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.499918 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.500030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.500111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.500176 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.603615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.603666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.603680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.603701 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.603713 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.706031 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.706062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.706070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.706085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.706094 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.808594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.808677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.808702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.808733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.808755 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.912341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.912381 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.912392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.912412 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.912423 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.015332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.015369 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.015378 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.015393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.015403 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.118325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.118371 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.118384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.118399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.118411 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.220854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.220890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.220898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.220911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.220920 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.281074 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:35 crc kubenswrapper[4740]: E0216 12:54:35.281214 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.281455 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:35 crc kubenswrapper[4740]: E0216 12:54:35.281534 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.281699 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:35 crc kubenswrapper[4740]: E0216 12:54:35.281765 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.282012 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:35 crc kubenswrapper[4740]: E0216 12:54:35.282125 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.293212 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:07:44.026109747 +0000 UTC Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.322874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.322921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.322934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.322951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.322962 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.426276 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.426331 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.426358 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.426402 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.426428 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.528867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.528911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.528919 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.528936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.528945 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.631725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.631840 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.631875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.631909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.631929 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.735981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.736054 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.736077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.736106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.736127 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.840023 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.840777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.840901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.840944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.840966 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.944654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.944737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.944760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.944786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.944805 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.048245 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.048341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.048391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.048484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.048522 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.151080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.151121 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.151131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.151145 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.151156 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.254110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.254155 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.254165 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.254179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.254188 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.294346 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:58:25.980995695 +0000 UTC Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.356391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.356450 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.356466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.356490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.356507 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.459862 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.459910 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.459926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.459947 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.459959 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.562622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.562704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.562725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.562750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.562768 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.666619 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.666712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.666748 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.666782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.666870 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.769708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.769768 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.769786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.769848 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.769867 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.873366 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.873430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.873446 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.873484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.873504 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.975931 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.976005 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.976029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.976055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.976075 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.078764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.078842 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.078857 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.078873 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.078885 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.181269 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.181319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.181338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.181365 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.181383 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.280573 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.280611 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.280681 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.280762 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:37 crc kubenswrapper[4740]: E0216 12:54:37.280953 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:37 crc kubenswrapper[4740]: E0216 12:54:37.281129 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:37 crc kubenswrapper[4740]: E0216 12:54:37.281298 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:37 crc kubenswrapper[4740]: E0216 12:54:37.281548 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.283844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.283885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.283897 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.283919 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.283934 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.294902 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:59:31.100000999 +0000 UTC Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.387324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.387408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.387426 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.387478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.387493 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.490864 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.490988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.491329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.491590 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.491890 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.594754 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.594838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.594861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.594885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.594903 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.697658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.697729 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.697741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.697759 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.697772 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.800151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.800194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.800203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.800217 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.800227 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.902389 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.902452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.902468 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.902491 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.902509 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.004339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.004377 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.004385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.004403 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.004420 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.107122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.107203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.107222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.107246 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.107263 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.210281 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.210364 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.210392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.210424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.210447 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.281386 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" Feb 16 12:54:38 crc kubenswrapper[4740]: E0216 12:54:38.281543 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.295040 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:35:44.335709445 +0000 UTC Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.312718 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.312793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.312805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.312866 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.312882 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.416052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.416124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.416141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.416168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.416186 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.518966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.519029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.519049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.519072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.519105 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.621583 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.621648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.621667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.621693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.621712 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.723762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.723802 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.723826 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.723840 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.723850 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.826656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.826705 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.826721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.826743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.826758 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.929026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.929088 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.929105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.929129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.929148 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.032569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.032619 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.032635 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.032659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.032677 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.135855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.135920 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.135933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.135951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.135962 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.238525 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.238604 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.238628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.238663 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.238685 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.280513 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.280664 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:39 crc kubenswrapper[4740]: E0216 12:54:39.280727 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.280664 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.280922 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:39 crc kubenswrapper[4740]: E0216 12:54:39.280847 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:39 crc kubenswrapper[4740]: E0216 12:54:39.281054 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:39 crc kubenswrapper[4740]: E0216 12:54:39.281278 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.295987 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:33:23.201423742 +0000 UTC Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.341702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.341769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.341787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.341804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.341838 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.445253 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.445357 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.445383 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.445414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.445434 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.549080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.549159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.549172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.549198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.549216 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.651644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.651700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.651712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.651731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.651742 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.754292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.754337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.754350 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.754369 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.754381 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.856599 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.856643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.856651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.856664 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.856674 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.959628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.959662 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.959670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.959684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.959693 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.062571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.062679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.062719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.062757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.062787 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.166136 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.166186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.166203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.166228 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.166246 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.269241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.269509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.269564 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.269598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.269617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.296333 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:27:06.994486571 +0000 UTC Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.372906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.372974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.372992 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.373022 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.373048 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.476691 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.476769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.476795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.476860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.476884 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.579251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.579312 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.579329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.579353 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.579371 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.682305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.682360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.682372 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.682391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.682404 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.785577 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.785663 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.785677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.785711 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.785721 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.893502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.893584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.893607 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.893737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.893797 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.996753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.997345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.997517 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.997715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.997925 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.101261 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.101300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.101310 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.101326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.101336 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.204324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.204366 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.204413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.204434 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.204446 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.281152 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.282030 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.282061 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.282176 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:41 crc kubenswrapper[4740]: E0216 12:54:41.282265 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:41 crc kubenswrapper[4740]: E0216 12:54:41.282432 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:41 crc kubenswrapper[4740]: E0216 12:54:41.282580 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:41 crc kubenswrapper[4740]: E0216 12:54:41.282649 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.297269 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:59:16.36824344 +0000 UTC Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.308905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.308940 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.308950 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.308965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.308974 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.412584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.412647 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.412662 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.412686 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.412702 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.514987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.515086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.515105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.515129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.515147 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.617449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.617502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.617516 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.617539 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.617560 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.720735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.720777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.720787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.720800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.720830 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.823471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.823712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.823782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.823864 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.823963 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.850348 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.850423 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.850519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.850557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.850605 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.897248 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ttqrb" podStartSLOduration=88.897228853 podStartE2EDuration="1m28.897228853s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.654515894 +0000 UTC m=+101.030864615" watchObservedRunningTime="2026-02-16 12:54:41.897228853 +0000 UTC m=+109.273577574" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.898178 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5"] Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.898661 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.900549 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.900744 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.900889 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.901406 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.916303 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" podStartSLOduration=88.916272294 
podStartE2EDuration="1m28.916272294s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:41.915962035 +0000 UTC m=+109.292310806" watchObservedRunningTime="2026-02-16 12:54:41.916272294 +0000 UTC m=+109.292621055" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.931876 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7zs65" podStartSLOduration=88.931798072 podStartE2EDuration="1m28.931798072s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:41.931303017 +0000 UTC m=+109.307651748" watchObservedRunningTime="2026-02-16 12:54:41.931798072 +0000 UTC m=+109.308146823" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.949075 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.949223 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.949296 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-service-ca\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.949391 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.949422 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.050447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.050519 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.050547 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.050597 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.050618 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-service-ca\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.051613 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-service-ca\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.051925 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.051984 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.060517 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.075774 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.215016 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.297468 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:32:35.968072135 +0000 UTC Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.297792 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.310132 4740 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.905149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" event={"ID":"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36","Type":"ContainerStarted","Data":"498fbaf5c4a72f772692c7e299cfcef7ee483f4ba623b254c2333f1ef05560a6"} Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.905239 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" event={"ID":"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36","Type":"ContainerStarted","Data":"8205a873042a570f5ceca69c064fbf3f4fb331d981ac467560a634d8a9de7340"} Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.926959 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" podStartSLOduration=89.926938376 podStartE2EDuration="1m29.926938376s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:42.926684118 +0000 UTC m=+110.303032839" watchObservedRunningTime="2026-02-16 12:54:42.926938376 +0000 UTC m=+110.303287107" Feb 16 12:54:43 crc 
kubenswrapper[4740]: I0216 12:54:43.280972 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:43 crc kubenswrapper[4740]: I0216 12:54:43.285084 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:43 crc kubenswrapper[4740]: E0216 12:54:43.285076 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:43 crc kubenswrapper[4740]: I0216 12:54:43.285370 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:43 crc kubenswrapper[4740]: E0216 12:54:43.285723 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:43 crc kubenswrapper[4740]: E0216 12:54:43.285955 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:43 crc kubenswrapper[4740]: I0216 12:54:43.285108 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:43 crc kubenswrapper[4740]: E0216 12:54:43.286304 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:45 crc kubenswrapper[4740]: I0216 12:54:45.280531 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:45 crc kubenswrapper[4740]: I0216 12:54:45.280531 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:45 crc kubenswrapper[4740]: I0216 12:54:45.280710 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:45 crc kubenswrapper[4740]: E0216 12:54:45.280744 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:45 crc kubenswrapper[4740]: I0216 12:54:45.281411 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:45 crc kubenswrapper[4740]: E0216 12:54:45.281533 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:45 crc kubenswrapper[4740]: E0216 12:54:45.281620 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:45 crc kubenswrapper[4740]: E0216 12:54:45.281685 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.280793 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.280858 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.280903 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.280893 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:47 crc kubenswrapper[4740]: E0216 12:54:47.281053 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:47 crc kubenswrapper[4740]: E0216 12:54:47.281153 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:47 crc kubenswrapper[4740]: E0216 12:54:47.281249 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:47 crc kubenswrapper[4740]: E0216 12:54:47.281360 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.925610 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/1.log" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.926519 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/0.log" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.926606 4740 generic.go:334] "Generic (PLEG): container finished" podID="21f981d4-46dd-4bb5-b244-aaf603008c5e" containerID="f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb" exitCode=1 Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.926666 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerDied","Data":"f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb"} Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.926801 4740 scope.go:117] "RemoveContainer" containerID="a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.927515 4740 scope.go:117] "RemoveContainer" containerID="f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb" Feb 16 12:54:47 crc kubenswrapper[4740]: 
E0216 12:54:47.929291 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-v88dn_openshift-multus(21f981d4-46dd-4bb5-b244-aaf603008c5e)\"" pod="openshift-multus/multus-v88dn" podUID="21f981d4-46dd-4bb5-b244-aaf603008c5e" Feb 16 12:54:48 crc kubenswrapper[4740]: I0216 12:54:48.931133 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/1.log" Feb 16 12:54:49 crc kubenswrapper[4740]: I0216 12:54:49.280689 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:49 crc kubenswrapper[4740]: I0216 12:54:49.280740 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:49 crc kubenswrapper[4740]: I0216 12:54:49.280712 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:49 crc kubenswrapper[4740]: I0216 12:54:49.280689 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:49 crc kubenswrapper[4740]: E0216 12:54:49.280858 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:49 crc kubenswrapper[4740]: E0216 12:54:49.280915 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:49 crc kubenswrapper[4740]: E0216 12:54:49.281097 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:49 crc kubenswrapper[4740]: E0216 12:54:49.281242 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:51 crc kubenswrapper[4740]: I0216 12:54:51.281028 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:51 crc kubenswrapper[4740]: E0216 12:54:51.281179 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:51 crc kubenswrapper[4740]: I0216 12:54:51.281490 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:51 crc kubenswrapper[4740]: E0216 12:54:51.281595 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:51 crc kubenswrapper[4740]: I0216 12:54:51.281794 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:51 crc kubenswrapper[4740]: E0216 12:54:51.281931 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:51 crc kubenswrapper[4740]: I0216 12:54:51.282301 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:51 crc kubenswrapper[4740]: E0216 12:54:51.282406 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:52 crc kubenswrapper[4740]: I0216 12:54:52.281276 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" Feb 16 12:54:52 crc kubenswrapper[4740]: I0216 12:54:52.944047 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/3.log" Feb 16 12:54:52 crc kubenswrapper[4740]: I0216 12:54:52.946851 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"0a6a80333534e58b90885aab282874ad3931ca7d2826046ac167da650d7e688d"} Feb 16 12:54:52 crc kubenswrapper[4740]: I0216 12:54:52.947411 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:54:52 crc kubenswrapper[4740]: I0216 12:54:52.972412 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podStartSLOduration=99.972393729 podStartE2EDuration="1m39.972393729s" 
podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:52.971712689 +0000 UTC m=+120.348061400" watchObservedRunningTime="2026-02-16 12:54:52.972393729 +0000 UTC m=+120.348742450" Feb 16 12:54:53 crc kubenswrapper[4740]: I0216 12:54:53.157133 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tcfzx"] Feb 16 12:54:53 crc kubenswrapper[4740]: I0216 12:54:53.157268 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.157372 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.270500 4740 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 12:54:53 crc kubenswrapper[4740]: I0216 12:54:53.280278 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.281164 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:53 crc kubenswrapper[4740]: I0216 12:54:53.281228 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:53 crc kubenswrapper[4740]: I0216 12:54:53.281286 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.281386 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.281475 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.398326 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 12:54:54 crc kubenswrapper[4740]: I0216 12:54:54.280552 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:54 crc kubenswrapper[4740]: E0216 12:54:54.280760 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:55 crc kubenswrapper[4740]: I0216 12:54:55.280356 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:55 crc kubenswrapper[4740]: I0216 12:54:55.280362 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:55 crc kubenswrapper[4740]: E0216 12:54:55.280579 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:55 crc kubenswrapper[4740]: E0216 12:54:55.280699 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:55 crc kubenswrapper[4740]: I0216 12:54:55.282542 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:55 crc kubenswrapper[4740]: E0216 12:54:55.282753 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:56 crc kubenswrapper[4740]: I0216 12:54:56.280357 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:56 crc kubenswrapper[4740]: E0216 12:54:56.280615 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:57 crc kubenswrapper[4740]: I0216 12:54:57.280598 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:57 crc kubenswrapper[4740]: I0216 12:54:57.280623 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:57 crc kubenswrapper[4740]: I0216 12:54:57.280846 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:57 crc kubenswrapper[4740]: E0216 12:54:57.282362 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:57 crc kubenswrapper[4740]: E0216 12:54:57.282603 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:57 crc kubenswrapper[4740]: E0216 12:54:57.282527 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:58 crc kubenswrapper[4740]: I0216 12:54:58.280822 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:58 crc kubenswrapper[4740]: E0216 12:54:58.280958 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:58 crc kubenswrapper[4740]: E0216 12:54:58.400119 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 12:54:59 crc kubenswrapper[4740]: I0216 12:54:59.280391 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:59 crc kubenswrapper[4740]: E0216 12:54:59.280947 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:59 crc kubenswrapper[4740]: I0216 12:54:59.280584 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:59 crc kubenswrapper[4740]: E0216 12:54:59.281986 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:59 crc kubenswrapper[4740]: I0216 12:54:59.280444 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:59 crc kubenswrapper[4740]: E0216 12:54:59.282258 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:55:00 crc kubenswrapper[4740]: I0216 12:55:00.281197 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:00 crc kubenswrapper[4740]: E0216 12:55:00.281398 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:55:01 crc kubenswrapper[4740]: I0216 12:55:01.280469 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:01 crc kubenswrapper[4740]: I0216 12:55:01.280505 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:01 crc kubenswrapper[4740]: I0216 12:55:01.280478 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:01 crc kubenswrapper[4740]: E0216 12:55:01.280608 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:55:01 crc kubenswrapper[4740]: E0216 12:55:01.280890 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:55:01 crc kubenswrapper[4740]: E0216 12:55:01.280874 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:55:02 crc kubenswrapper[4740]: I0216 12:55:02.280735 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:02 crc kubenswrapper[4740]: E0216 12:55:02.280912 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.281213 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.282514 4740 scope.go:117] "RemoveContainer" containerID="f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb" Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.283070 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.283112 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:03 crc kubenswrapper[4740]: E0216 12:55:03.283167 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:55:03 crc kubenswrapper[4740]: E0216 12:55:03.283334 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:55:03 crc kubenswrapper[4740]: E0216 12:55:03.283468 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:55:03 crc kubenswrapper[4740]: E0216 12:55:03.401560 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.988417 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/1.log" Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.988756 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerStarted","Data":"20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c"} Feb 16 12:55:04 crc kubenswrapper[4740]: I0216 12:55:04.280983 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:04 crc kubenswrapper[4740]: E0216 12:55:04.281109 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:55:05 crc kubenswrapper[4740]: I0216 12:55:05.280948 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:05 crc kubenswrapper[4740]: I0216 12:55:05.280953 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:05 crc kubenswrapper[4740]: E0216 12:55:05.281550 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:55:05 crc kubenswrapper[4740]: I0216 12:55:05.281009 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:05 crc kubenswrapper[4740]: E0216 12:55:05.281780 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:55:05 crc kubenswrapper[4740]: E0216 12:55:05.281862 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:55:06 crc kubenswrapper[4740]: I0216 12:55:06.280156 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:06 crc kubenswrapper[4740]: E0216 12:55:06.280327 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:55:07 crc kubenswrapper[4740]: I0216 12:55:07.280218 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:07 crc kubenswrapper[4740]: E0216 12:55:07.280359 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:55:07 crc kubenswrapper[4740]: I0216 12:55:07.280387 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:07 crc kubenswrapper[4740]: I0216 12:55:07.280450 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:07 crc kubenswrapper[4740]: E0216 12:55:07.280552 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:55:07 crc kubenswrapper[4740]: E0216 12:55:07.280721 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:55:08 crc kubenswrapper[4740]: I0216 12:55:08.280978 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:08 crc kubenswrapper[4740]: E0216 12:55:08.281247 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.280461 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.280520 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.280640 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.285310 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.285475 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.285799 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.285908 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 12:55:10 crc kubenswrapper[4740]: I0216 12:55:10.280941 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:10 crc kubenswrapper[4740]: I0216 12:55:10.282336 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 12:55:10 crc kubenswrapper[4740]: I0216 12:55:10.282986 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.282313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.334357 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.335332 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-65j55"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.335895 4740 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.336394 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.337240 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.341933 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.342925 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.343974 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.345238 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.345668 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.345668 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.346138 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.348974 4740 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349077 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349146 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349013 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349217 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349291 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349463 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349756 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349795 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.358788 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.359260 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" 
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.359539 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jcv2d"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.360116 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.360257 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.360405 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.361100 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.361973 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.362604 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.364875 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.365337 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.365949 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nqbws"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.366765 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.368070 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.368592 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.369095 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.369226 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.370108 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.370624 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-c86mj"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.371991 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.372310 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.372842 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.377864 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.379492 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380062 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380128 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380198 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380370 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380437 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380652 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380920 4740 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380965 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-m9529"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381036 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381154 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381349 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-gctsd"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381422 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381596 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381768 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381794 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381977 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.382279 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 12:55:12 crc 
kubenswrapper[4740]: I0216 12:55:12.382336 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.382565 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.382659 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.382760 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.382890 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.383079 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.383187 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.383095 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.385186 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.386262 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.388255 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.388777 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.388848 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.388878 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-m9529" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.390026 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.390091 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.390932 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.390950 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s69f4"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.391218 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.391641 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.391998 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.392149 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.402447 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.403127 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.403643 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.404022 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.404313 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.392001 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.392157 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.404828 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 12:55:12 crc 
kubenswrapper[4740]: I0216 12:55:12.405430 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.406469 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.406979 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.410221 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.410603 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.410742 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.410993 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.411166 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.411293 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.411402 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.411569 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.411980 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.412172 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.413536 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.415891 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.416551 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc2r7\" (UniqueName: \"kubernetes.io/projected/493225bc-7119-4eec-9314-aa63e475d061-kube-api-access-gc2r7\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.418743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-etcd-client\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.419146 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert\") pod 
\"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.420332 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.421009 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-auth-proxy-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.421203 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgcqf\" (UniqueName: \"kubernetes.io/projected/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-kube-api-access-qgcqf\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.421369 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66mmd\" (UniqueName: \"kubernetes.io/projected/24d07265-6abd-44a7-83c5-112c01083143-kube-api-access-66mmd\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 
12:55:12.421523 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-machine-approver-tls\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.421750 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/83add687-ddae-4960-8e05-c81bc891b8f0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.421930 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzrsl\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-kube-api-access-gzrsl\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.422219 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.422395 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.422569 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/493225bc-7119-4eec-9314-aa63e475d061-serving-cert\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.425604 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.425843 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427211 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-audit-policies\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427247 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-serving-cert\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427340 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427362 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-config\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427388 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24d07265-6abd-44a7-83c5-112c01083143-audit-dir\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427408 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hznzh\" (UniqueName: \"kubernetes.io/projected/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-kube-api-access-hznzh\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427443 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427480 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkn8r\" (UniqueName: \"kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427520 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427540 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427564 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-encryption-config\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427586 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcfns\" (UniqueName: \"kubernetes.io/projected/91631c8c-d18f-44d6-9919-0b5fe8e8d45b-kube-api-access-pcfns\") pod \"downloads-7954f5f757-m9529\" (UID: \"91631c8c-d18f-44d6-9919-0b5fe8e8d45b\") " pod="openshift-console/downloads-7954f5f757-m9529" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.426106 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 
12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427613 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83add687-ddae-4960-8e05-c81bc891b8f0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427900 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427928 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-trusted-ca\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.429326 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.429648 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.429849 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.430544 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.437051 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.505761 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.632839 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.633557 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.633717 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.633932 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.635598 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636337 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636357 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636405 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636442 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-auth-proxy-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636467 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgcqf\" (UniqueName: \"kubernetes.io/projected/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-kube-api-access-qgcqf\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636492 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66mmd\" (UniqueName: 
\"kubernetes.io/projected/24d07265-6abd-44a7-83c5-112c01083143-kube-api-access-66mmd\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636524 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-machine-approver-tls\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636566 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/83add687-ddae-4960-8e05-c81bc891b8f0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636604 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzrsl\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-kube-api-access-gzrsl\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636640 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636668 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636692 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/493225bc-7119-4eec-9314-aa63e475d061-serving-cert\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636702 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637212 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-auth-proxy-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636719 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-audit-policies\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637287 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-serving-cert\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637373 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637399 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-config\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637422 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24d07265-6abd-44a7-83c5-112c01083143-audit-dir\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637449 
4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hznzh\" (UniqueName: \"kubernetes.io/projected/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-kube-api-access-hznzh\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637486 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637511 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkn8r\" (UniqueName: \"kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637537 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: 
\"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637563 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637587 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-encryption-config\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637635 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637678 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcfns\" (UniqueName: \"kubernetes.io/projected/91631c8c-d18f-44d6-9919-0b5fe8e8d45b-kube-api-access-pcfns\") pod \"downloads-7954f5f757-m9529\" (UID: \"91631c8c-d18f-44d6-9919-0b5fe8e8d45b\") " pod="openshift-console/downloads-7954f5f757-m9529" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637722 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83add687-ddae-4960-8e05-c81bc891b8f0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637744 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637746 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637765 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-trusted-ca\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637827 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc2r7\" (UniqueName: \"kubernetes.io/projected/493225bc-7119-4eec-9314-aa63e475d061-kube-api-access-gc2r7\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-etcd-client\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.638319 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.638671 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.638755 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-audit-policies\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.640409 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.640497 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24d07265-6abd-44a7-83c5-112c01083143-audit-dir\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.640756 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-config\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.640865 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.641112 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.641135 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.643428 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-machine-approver-tls\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.643535 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.643428 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.643752 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-etcd-client\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.644042 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/493225bc-7119-4eec-9314-aa63e475d061-serving-cert\") pod \"console-operator-58897d9998-c86mj\" (UID: 
\"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.644125 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.644394 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-42rhd"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.644581 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.644835 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.645119 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.645141 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.645312 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.645469 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.645509 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5tlhr"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.646508 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.646757 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.646875 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.646885 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647018 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647098 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647114 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647403 4740 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647512 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647558 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647662 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.648504 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.648737 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.649118 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-serving-cert\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.649862 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.649870 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.649971 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc 
kubenswrapper[4740]: I0216 12:55:12.650036 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.650672 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.651190 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-encryption-config\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.651353 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.651448 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.653958 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.653996 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.654016 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/83add687-ddae-4960-8e05-c81bc891b8f0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.654080 4740 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.654547 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.656395 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jcv2d"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.658074 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.658262 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.658701 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.660252 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.663032 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.663517 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.663980 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.664037 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.664542 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.666088 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.666519 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.668919 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.670488 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83add687-ddae-4960-8e05-c81bc891b8f0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.671336 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-trusted-ca\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.671929 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.673022 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.675956 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.676870 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.677095 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.678883 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.679075 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.680463 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.682718 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gctsd"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.683610 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q8lfc"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.684702 4740 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.684911 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.685437 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.685848 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.686741 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.686771 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.687394 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.687658 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.688293 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.688638 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.689063 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.690174 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2wzdf"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.690680 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.691214 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.691992 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.692386 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.693263 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.693443 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5fhjt"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.694142 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.694438 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.695466 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.696490 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.697768 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q8lfc"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.698978 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.698985 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.700730 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2wzdf"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.702151 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.703506 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.704922 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-7954f5f757-m9529"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.706196 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.707494 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.708702 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nqbws"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.709855 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.710994 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-c86mj"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.712132 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.713157 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h95tf"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.714886 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-njwjd"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.715108 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.715677 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.715792 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.716939 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.718193 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.718661 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.719288 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.720374 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s69f4"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.721501 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.722860 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.724285 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5tlhr"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.725656 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.727613 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.730262 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.737623 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.739705 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.743037 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-28sp5"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.749045 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5fhjt"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.749097 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-28sp5"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.749280 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.750437 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h95tf"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.758923 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.778640 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.798823 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.818640 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.839673 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.858758 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.879233 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.899919 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.918964 4740 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.938855 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.959679 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.979328 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.999854 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.020910 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.062858 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66mmd\" (UniqueName: \"kubernetes.io/projected/24d07265-6abd-44a7-83c5-112c01083143-kube-api-access-66mmd\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.078362 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgcqf\" (UniqueName: \"kubernetes.io/projected/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-kube-api-access-qgcqf\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:13 crc 
kubenswrapper[4740]: I0216 12:55:13.104143 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzrsl\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-kube-api-access-gzrsl\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.108715 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.113917 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hznzh\" (UniqueName: \"kubernetes.io/projected/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-kube-api-access-hznzh\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.133065 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcfns\" (UniqueName: \"kubernetes.io/projected/91631c8c-d18f-44d6-9919-0b5fe8e8d45b-kube-api-access-pcfns\") pod \"downloads-7954f5f757-m9529\" (UID: \"91631c8c-d18f-44d6-9919-0b5fe8e8d45b\") " pod="openshift-console/downloads-7954f5f757-m9529" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.154467 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkn8r\" (UniqueName: \"kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.159198 4740 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.178632 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.198871 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.229338 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.239431 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.279265 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.285859 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.293377 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc2r7\" (UniqueName: \"kubernetes.io/projected/493225bc-7119-4eec-9314-aa63e475d061-kube-api-access-gc2r7\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:13 crc kubenswrapper[4740]: W0216 12:55:13.293681 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24d07265_6abd_44a7_83c5_112c01083143.slice/crio-5e765d58c0de1d8c6a644c38e194737794a372aec0e009d53df75e960e02e30c WatchSource:0}: Error finding container 
5e765d58c0de1d8c6a644c38e194737794a372aec0e009d53df75e960e02e30c: Status 404 returned error can't find the container with id 5e765d58c0de1d8c6a644c38e194737794a372aec0e009d53df75e960e02e30c Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.310534 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.318872 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.320604 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.331369 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.339759 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.352201 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.358635 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.362606 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.380442 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.387845 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-m9529" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.399533 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.420249 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.429660 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.440177 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 12:55:13 crc kubenswrapper[4740]: W0216 12:55:13.442844 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c88b213_e85e_4b8b_a9ee_f0f3224716ae.slice/crio-6f878f934a0e35be68b7540ca3c5ee14dd8a422b0ffc200676117e04446ed464 WatchSource:0}: Error finding container 
6f878f934a0e35be68b7540ca3c5ee14dd8a422b0ffc200676117e04446ed464: Status 404 returned error can't find the container with id 6f878f934a0e35be68b7540ca3c5ee14dd8a422b0ffc200676117e04446ed464 Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.459389 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.482051 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.492929 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.500562 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.519745 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.542637 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.559486 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.578710 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.599104 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.639253 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.659215 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663412 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663447 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663465 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2jj4\" (UniqueName: \"kubernetes.io/projected/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-kube-api-access-h2jj4\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663482 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663499 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663517 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663531 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663546 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663560 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663577 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-service-ca\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663593 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663607 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663622 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663638 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-image-import-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663667 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663695 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sslp\" (UniqueName: \"kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663723 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663738 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663754 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl4tc\" (UniqueName: \"kubernetes.io/projected/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-kube-api-access-gl4tc\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663767 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663784 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663800 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-config\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 
12:55:13.663832 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-serving-cert\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663846 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-encryption-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663862 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5w72\" (UniqueName: \"kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663877 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-serving-cert\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663892 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: 
\"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663910 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/28956c81-f1c4-471c-9564-5747a0a0aaf8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663926 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663950 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663965 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663982 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2p7j\" (UniqueName: \"kubernetes.io/projected/28956c81-f1c4-471c-9564-5747a0a0aaf8-kube-api-access-t2p7j\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663997 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-config\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664033 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664049 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-serving-cert\") pod 
\"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664063 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664078 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-audit\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664106 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-config\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664121 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-images\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664168 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664211 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvs95\" (UniqueName: \"kubernetes.io/projected/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-kube-api-access-fvs95\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664231 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/643bf47c-570f-4204-adb1-512cd9e914b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664250 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: 
\"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664268 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-etcd-serving-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664304 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr6wd\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664327 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6tnj\" (UniqueName: \"kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664347 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664365 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-etcd-client\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664382 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-client\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: E0216 12:55:13.664398 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.164385895 +0000 UTC m=+141.540734616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664427 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lvfp\" (UniqueName: \"kubernetes.io/projected/643bf47c-570f-4204-adb1-512cd9e914b8-kube-api-access-8lvfp\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664444 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664458 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82b2k\" (UniqueName: \"kubernetes.io/projected/fb14491a-6043-446a-8b10-626838253345-kube-api-access-82b2k\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664472 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664497 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-audit-dir\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664511 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664526 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-serving-cert\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664541 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-service-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664555 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664570 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664590 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-ca\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664603 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-node-pullsecrets\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.677934 4740 request.go:700] Waited for 1.010998814s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.678859 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.699146 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.719031 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.739360 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.759208 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.765803 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:13 crc kubenswrapper[4740]: 
E0216 12:55:13.765932 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.26590605 +0000 UTC m=+141.642254771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.765985 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-config\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766009 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766025 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: 
\"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766050 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-socket-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766068 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74217d18-e17c-469b-a492-49b62f2f96c9-config\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766098 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-serving-cert\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766113 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-audit\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980ab133-4d29-4d9e-b359-bf3cb06fbba3-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766156 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr2qz\" (UniqueName: \"kubernetes.io/projected/6f465ee4-90ff-4746-a90f-1e964b6c4d05-kube-api-access-qr2qz\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766172 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766193 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-config\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766214 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc 
kubenswrapper[4740]: I0216 12:55:13.766231 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-etcd-serving-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fld5q\" (UniqueName: \"kubernetes.io/projected/2dc85ee1-e9d1-4d68-b953-30d83f8e7aef-kube-api-access-fld5q\") pod \"migrator-59844c95c7-272mp\" (UID: \"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766262 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6tnj\" (UniqueName: \"kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766277 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-etcd-client\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766293 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-certs\") pod \"machine-config-server-njwjd\" (UID: 
\"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766309 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r5dd\" (UniqueName: \"kubernetes.io/projected/d5f0e5d1-897e-4200-8ea7-716faf71db56-kube-api-access-7r5dd\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766334 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-cert\") pod \"ingress-canary-5fhjt\" (UID: \"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766353 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766368 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb4dx\" (UniqueName: \"kubernetes.io/projected/74217d18-e17c-469b-a492-49b62f2f96c9-kube-api-access-bb4dx\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766412 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8lvfp\" (UniqueName: \"kubernetes.io/projected/643bf47c-570f-4204-adb1-512cd9e914b8-kube-api-access-8lvfp\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766428 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766443 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766471 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766487 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad6fcf0b-0176-4920-93de-563a8f4af054-trusted-ca\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 
crc kubenswrapper[4740]: I0216 12:55:13.766503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-serving-cert\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766518 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-service-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766534 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766549 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766856 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-mountpoint-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766942 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-node-pullsecrets\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766985 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-ca\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767011 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767041 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-stats-auth\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767063 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2jj4\" (UniqueName: 
\"kubernetes.io/projected/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-kube-api-access-h2jj4\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767084 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767109 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767125 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767157 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7393aab-0211-49f3-b683-3cf11cae93c6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767182 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767172 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767202 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74217d18-e17c-469b-a492-49b62f2f96c9-serving-cert\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767224 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/456feb2b-91a3-42ae-aa03-accd55804c79-signing-key\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767247 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767286 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767321 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-node-bootstrap-token\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767333 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-config\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc 
kubenswrapper[4740]: I0216 12:55:13.767343 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ml9\" (UniqueName: \"kubernetes.io/projected/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-kube-api-access-d8ml9\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767436 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdrk\" (UniqueName: \"kubernetes.io/projected/d24bd6df-1e79-4e8b-a71a-c3f07422af23-kube-api-access-hwdrk\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767480 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-plugins-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767513 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-webhook-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-ca\") pod 
\"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767586 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f50997-a877-4d3f-9cf7-df6d254b48f5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767608 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-node-pullsecrets\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767655 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbrl8\" (UniqueName: \"kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767705 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767742 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f4e839d-cd94-49e9-a386-e90820fceb5c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767795 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767848 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59b2l\" (UniqueName: \"kubernetes.io/projected/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-kube-api-access-59b2l\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767884 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980ab133-4d29-4d9e-b359-bf3cb06fbba3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767919 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqjf7\" (UniqueName: 
\"kubernetes.io/projected/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-kube-api-access-gqjf7\") pod \"ingress-canary-5fhjt\" (UID: \"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767950 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-apiservice-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767978 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768036 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psrt2\" (UniqueName: \"kubernetes.io/projected/d2b43cb6-05b8-4834-b187-1377370007fd-kube-api-access-psrt2\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768067 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s29x\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-kube-api-access-7s29x\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768103 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/28956c81-f1c4-471c-9564-5747a0a0aaf8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768138 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768170 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768225 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-metrics-certs\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768260 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7393aab-0211-49f3-b683-3cf11cae93c6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c975g\" (UniqueName: \"kubernetes.io/projected/1a760deb-c84d-4da0-a20b-dac7b17c24c7-kube-api-access-c975g\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768331 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-csi-data-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768369 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-config-volume\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768403 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2p7j\" (UniqueName: \"kubernetes.io/projected/28956c81-f1c4-471c-9564-5747a0a0aaf8-kube-api-access-t2p7j\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768456 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91338fe1-147f-41ff-9816-8cdcb7d1a08b-service-ca-bundle\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768487 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtf4q\" (UniqueName: \"kubernetes.io/projected/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-kube-api-access-vtf4q\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768524 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768557 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-default-certificate\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768621 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768655 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-images\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768681 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768686 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7393aab-0211-49f3-b683-3cf11cae93c6-config\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768718 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 
crc kubenswrapper[4740]: I0216 12:55:13.768738 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvs95\" (UniqueName: \"kubernetes.io/projected/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-kube-api-access-fvs95\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768757 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768885 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: E0216 12:55:13.769050 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.269038038 +0000 UTC m=+141.645386759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.769105 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768561 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.769330 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770000 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " 
pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770429 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-config\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770480 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/456feb2b-91a3-42ae-aa03-accd55804c79-signing-cabundle\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770636 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tngkz\" (UniqueName: \"kubernetes.io/projected/ad3a4715-2249-418d-b03e-bd5aac43089e-kube-api-access-tngkz\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: \"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770723 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies\") pod 
\"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2pvn\" (UniqueName: \"kubernetes.io/projected/2eef055f-7504-4f20-817e-afcd1bb6f996-kube-api-access-g2pvn\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770767 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770923 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/643bf47c-570f-4204-adb1-512cd9e914b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.771620 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.771775 
4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.771892 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr6wd\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772029 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad3a4715-2249-418d-b03e-bd5aac43089e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: \"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772252 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " 
pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772285 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-etcd-serving-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772319 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-client\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772973 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad6fcf0b-0176-4920-93de-563a8f4af054-metrics-tls\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.773013 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.773484 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82b2k\" (UniqueName: \"kubernetes.io/projected/fb14491a-6043-446a-8b10-626838253345-kube-api-access-82b2k\") pod \"apiserver-76f77b778f-nqbws\" (UID: 
\"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.773516 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-srv-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.774122 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-audit\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.774583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.774692 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-images\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775307 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775411 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775455 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-audit-dir\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775547 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxng9\" (UniqueName: \"kubernetes.io/projected/8f4e839d-cd94-49e9-a386-e90820fceb5c-kube-api-access-sxng9\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: 
I0216 12:55:13.775609 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/382ac2b0-b15a-412a-b8fb-e61844137cb1-metrics-tls\") pod \"dns-operator-744455d44c-5tlhr\" (UID: \"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775647 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvm2f\" (UniqueName: \"kubernetes.io/projected/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-kube-api-access-zvm2f\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775701 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b43cb6-05b8-4834-b187-1377370007fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775736 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f50997-a877-4d3f-9cf7-df6d254b48f5-config\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775767 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-images\") pod 
\"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775851 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775888 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrhlc\" (UniqueName: \"kubernetes.io/projected/91338fe1-147f-41ff-9816-8cdcb7d1a08b-kube-api-access-zrhlc\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-registration-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775955 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-metrics-tls\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.776035 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-pm2wm\" (UniqueName: \"kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.776217 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-service-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.776593 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.776682 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-audit-dir\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.777233 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 
16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.794792 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-etcd-client\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.795570 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.796428 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.797224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.797701 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.798076 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.798718 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/643bf47c-570f-4204-adb1-512cd9e914b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.799224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.822017 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-serving-cert\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.822668 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.776105 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b2f50997-a877-4d3f-9cf7-df6d254b48f5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823040 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823066 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980ab133-4d29-4d9e-b359-bf3cb06fbba3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823092 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823110 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-service-ca\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823134 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823152 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-srv-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823172 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw2gw\" (UniqueName: \"kubernetes.io/projected/382ac2b0-b15a-412a-b8fb-e61844137cb1-kube-api-access-sw2gw\") pod \"dns-operator-744455d44c-5tlhr\" (UID: \"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-image-import-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823211 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f4e839d-cd94-49e9-a386-e90820fceb5c-proxy-tls\") pod 
\"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823229 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-profile-collector-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b43cb6-05b8-4834-b187-1377370007fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823261 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k7wp\" (UniqueName: \"kubernetes.io/projected/456feb2b-91a3-42ae-aa03-accd55804c79-kube-api-access-8k7wp\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823280 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2eef055f-7504-4f20-817e-afcd1bb6f996-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823297 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823314 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823333 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sslp\" (UniqueName: \"kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823348 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823408 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823425 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823456 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl4tc\" (UniqueName: \"kubernetes.io/projected/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-kube-api-access-gl4tc\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823474 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-tmpfs\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823495 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-config\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 
12:55:13.823510 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-serving-cert\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823531 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5w72\" (UniqueName: \"kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823550 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-encryption-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823567 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a760deb-c84d-4da0-a20b-dac7b17c24c7-proxy-tls\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823587 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnz4r\" (UniqueName: \"kubernetes.io/projected/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-kube-api-access-hnz4r\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " 
pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823603 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-serving-cert\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823635 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823651 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823986 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: 
\"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.824545 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-config\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.824935 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-image-import-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.825357 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/28956c81-f1c4-471c-9564-5747a0a0aaf8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.826389 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.827325 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.828086 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-service-ca\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.828301 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.830210 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.830581 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.832261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.833007 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-serving-cert\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.833030 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-serving-cert\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.833432 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.833792 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-client\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.834193 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.834298 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.835149 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-encryption-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.838749 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-serving-cert\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.839120 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.847687 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-c86mj"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.860337 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.861309 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-m9529"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.862198 4740 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.880138 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.899510 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.919666 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924369 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:13 crc kubenswrapper[4740]: E0216 12:55:13.924575 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.424551251 +0000 UTC m=+141.800899972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924661 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74217d18-e17c-469b-a492-49b62f2f96c9-config\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924716 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-socket-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924753 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980ab133-4d29-4d9e-b359-bf3cb06fbba3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924786 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr2qz\" (UniqueName: \"kubernetes.io/projected/6f465ee4-90ff-4746-a90f-1e964b6c4d05-kube-api-access-qr2qz\") pod 
\"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924843 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924899 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924980 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-socket-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: E0216 12:55:13.925286 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.425270954 +0000 UTC m=+141.801619675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925500 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fld5q\" (UniqueName: \"kubernetes.io/projected/2dc85ee1-e9d1-4d68-b953-30d83f8e7aef-kube-api-access-fld5q\") pod \"migrator-59844c95c7-272mp\" (UID: \"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925553 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-certs\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925577 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r5dd\" (UniqueName: \"kubernetes.io/projected/d5f0e5d1-897e-4200-8ea7-716faf71db56-kube-api-access-7r5dd\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-cert\") pod \"ingress-canary-5fhjt\" (UID: 
\"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925685 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925715 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb4dx\" (UniqueName: \"kubernetes.io/projected/74217d18-e17c-469b-a492-49b62f2f96c9-kube-api-access-bb4dx\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925773 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925804 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad6fcf0b-0176-4920-93de-563a8f4af054-trusted-ca\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925855 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" 
(UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-mountpoint-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925908 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-stats-auth\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925953 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7393aab-0211-49f3-b683-3cf11cae93c6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925965 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-mountpoint-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925988 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74217d18-e17c-469b-a492-49b62f2f96c9-serving-cert\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926045 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/456feb2b-91a3-42ae-aa03-accd55804c79-signing-key\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926104 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-node-bootstrap-token\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926154 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8ml9\" (UniqueName: \"kubernetes.io/projected/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-kube-api-access-d8ml9\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926184 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwdrk\" (UniqueName: \"kubernetes.io/projected/d24bd6df-1e79-4e8b-a71a-c3f07422af23-kube-api-access-hwdrk\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926209 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-plugins-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc 
kubenswrapper[4740]: I0216 12:55:13.926258 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-webhook-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926288 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f4e839d-cd94-49e9-a386-e90820fceb5c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f50997-a877-4d3f-9cf7-df6d254b48f5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbrl8\" (UniqueName: \"kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926421 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59b2l\" (UniqueName: \"kubernetes.io/projected/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-kube-api-access-59b2l\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926451 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980ab133-4d29-4d9e-b359-bf3cb06fbba3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926625 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-plugins-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926499 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqjf7\" (UniqueName: \"kubernetes.io/projected/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-kube-api-access-gqjf7\") pod \"ingress-canary-5fhjt\" (UID: \"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927283 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psrt2\" (UniqueName: \"kubernetes.io/projected/d2b43cb6-05b8-4834-b187-1377370007fd-kube-api-access-psrt2\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927341 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s29x\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-kube-api-access-7s29x\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927370 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-apiservice-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927450 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927498 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c975g\" (UniqueName: \"kubernetes.io/projected/1a760deb-c84d-4da0-a20b-dac7b17c24c7-kube-api-access-c975g\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: 
\"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927527 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-csi-data-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927584 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f4e839d-cd94-49e9-a386-e90820fceb5c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927582 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-metrics-certs\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927740 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7393aab-0211-49f3-b683-3cf11cae93c6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927834 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-config-volume\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927794 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-csi-data-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91338fe1-147f-41ff-9816-8cdcb7d1a08b-service-ca-bundle\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927932 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtf4q\" (UniqueName: \"kubernetes.io/projected/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-kube-api-access-vtf4q\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927989 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-default-certificate\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928017 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a7393aab-0211-49f3-b683-3cf11cae93c6-config\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928067 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928093 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tngkz\" (UniqueName: \"kubernetes.io/projected/ad3a4715-2249-418d-b03e-bd5aac43089e-kube-api-access-tngkz\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: \"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928159 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928187 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/456feb2b-91a3-42ae-aa03-accd55804c79-signing-cabundle\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:13 crc 
kubenswrapper[4740]: I0216 12:55:13.928247 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2pvn\" (UniqueName: \"kubernetes.io/projected/2eef055f-7504-4f20-817e-afcd1bb6f996-kube-api-access-g2pvn\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928306 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad3a4715-2249-418d-b03e-bd5aac43089e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: \"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928349 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad6fcf0b-0176-4920-93de-563a8f4af054-metrics-tls\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928412 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-srv-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928468 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/382ac2b0-b15a-412a-b8fb-e61844137cb1-metrics-tls\") pod \"dns-operator-744455d44c-5tlhr\" (UID: 
\"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928507 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxng9\" (UniqueName: \"kubernetes.io/projected/8f4e839d-cd94-49e9-a386-e90820fceb5c-kube-api-access-sxng9\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928554 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f50997-a877-4d3f-9cf7-df6d254b48f5-config\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-images\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928627 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvm2f\" (UniqueName: \"kubernetes.io/projected/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-kube-api-access-zvm2f\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928656 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d2b43cb6-05b8-4834-b187-1377370007fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928664 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91338fe1-147f-41ff-9816-8cdcb7d1a08b-service-ca-bundle\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928683 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrhlc\" (UniqueName: \"kubernetes.io/projected/91338fe1-147f-41ff-9816-8cdcb7d1a08b-kube-api-access-zrhlc\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928735 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-registration-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928756 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-metrics-tls\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.929256 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad6fcf0b-0176-4920-93de-563a8f4af054-trusted-ca\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.929513 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7393aab-0211-49f3-b683-3cf11cae93c6-config\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.929721 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2f50997-a877-4d3f-9cf7-df6d254b48f5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.929762 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm2wm\" (UniqueName: \"kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930023 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930055 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980ab133-4d29-4d9e-b359-bf3cb06fbba3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930091 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-srv-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930117 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw2gw\" (UniqueName: \"kubernetes.io/projected/382ac2b0-b15a-412a-b8fb-e61844137cb1-kube-api-access-sw2gw\") pod \"dns-operator-744455d44c-5tlhr\" (UID: \"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930143 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k7wp\" (UniqueName: \"kubernetes.io/projected/456feb2b-91a3-42ae-aa03-accd55804c79-kube-api-access-8k7wp\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930167 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/2eef055f-7504-4f20-817e-afcd1bb6f996-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930196 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f4e839d-cd94-49e9-a386-e90820fceb5c-proxy-tls\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930219 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-profile-collector-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930243 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b43cb6-05b8-4834-b187-1377370007fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930274 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930328 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b43cb6-05b8-4834-b187-1377370007fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930677 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.931319 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980ab133-4d29-4d9e-b359-bf3cb06fbba3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.931821 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7393aab-0211-49f3-b683-3cf11cae93c6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.931984 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.932299 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-stats-auth\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.932301 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f50997-a877-4d3f-9cf7-df6d254b48f5-config\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.932452 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-registration-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.932617 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad6fcf0b-0176-4920-93de-563a8f4af054-metrics-tls\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.933248 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.934240 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.934355 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-tmpfs\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.934409 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a760deb-c84d-4da0-a20b-dac7b17c24c7-proxy-tls\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.934439 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnz4r\" (UniqueName: \"kubernetes.io/projected/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-kube-api-access-hnz4r\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 
12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.934925 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-tmpfs\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.935258 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-images\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.936318 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f50997-a877-4d3f-9cf7-df6d254b48f5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.936877 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/382ac2b0-b15a-412a-b8fb-e61844137cb1-metrics-tls\") pod \"dns-operator-744455d44c-5tlhr\" (UID: \"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.937390 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f4e839d-cd94-49e9-a386-e90820fceb5c-proxy-tls\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.937524 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a760deb-c84d-4da0-a20b-dac7b17c24c7-proxy-tls\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.937651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-srv-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.938834 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-profile-collector-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.938998 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.941571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b43cb6-05b8-4834-b187-1377370007fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc 
kubenswrapper[4740]: I0216 12:55:13.941647 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.942327 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-srv-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.947460 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2eef055f-7504-4f20-817e-afcd1bb6f996-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.949152 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-default-certificate\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.949167 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-metrics-certs\") pod \"router-default-5444994796-42rhd\" (UID: 
\"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.949725 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad3a4715-2249-418d-b03e-bd5aac43089e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: \"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.949800 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.950608 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980ab133-4d29-4d9e-b359-bf3cb06fbba3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.960544 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.979458 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.990665 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.999082 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.027159 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.034040 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m9529" event={"ID":"91631c8c-d18f-44d6-9919-0b5fe8e8d45b","Type":"ContainerStarted","Data":"61bc7a8f1f7d81fd976e58331fef837ba11654d5e85e6a6d26c678a0bcdd6892"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.034801 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.035073 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.535027198 +0000 UTC m=+141.911376069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.035138 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.038036 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-c86mj" event={"ID":"493225bc-7119-4eec-9314-aa63e475d061","Type":"ContainerStarted","Data":"e946c237141087ec0151e70d67bf7411d255f2e97363c60380d7d87b7356a18d"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.038925 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.040111 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" event={"ID":"798d1269-3882-45e8-898e-a625cf386089","Type":"ContainerStarted","Data":"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.040165 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" 
event={"ID":"798d1269-3882-45e8-898e-a625cf386089","Type":"ContainerStarted","Data":"29e6c5dab661956c91b79a723fe07411f83f7e5c787f55a2531731add29989ac"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.040420 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.042182 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" event={"ID":"3c88b213-e85e-4b8b-a9ee-f0f3224716ae","Type":"ContainerStarted","Data":"9d258440943667a765c8762344a0a70eb2706cbb4ab1c832f0315e738691b4e0"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.042222 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" event={"ID":"3c88b213-e85e-4b8b-a9ee-f0f3224716ae","Type":"ContainerStarted","Data":"6f878f934a0e35be68b7540ca3c5ee14dd8a422b0ffc200676117e04446ed464"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.042897 4740 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tdlx8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.042944 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.044367 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" event={"ID":"83add687-ddae-4960-8e05-c81bc891b8f0","Type":"ContainerStarted","Data":"bf7d7d675b0fc61dcb8cb818b6eedaeae4c546f6fe5e90ade90d97e0ee3a191b"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.046631 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" event={"ID":"29475a43-ba44-4a2c-8cc9-08da7b1f75c6","Type":"ContainerStarted","Data":"97d0463c95281f8c84d643e76732c752bf8d1645784068dd5914f0030e554046"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.046698 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" event={"ID":"29475a43-ba44-4a2c-8cc9-08da7b1f75c6","Type":"ContainerStarted","Data":"d9c8dd9c05d58eac5cb9a434a182b3d9269de7b80a871500be8f0f696a0d7d18"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.046717 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" event={"ID":"29475a43-ba44-4a2c-8cc9-08da7b1f75c6","Type":"ContainerStarted","Data":"a32fdf0c567cc8ab716d31a72f0c0b5840dfab8daea6115187ad0343a8bd7fcc"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.050416 4740 generic.go:334] "Generic (PLEG): container finished" podID="24d07265-6abd-44a7-83c5-112c01083143" containerID="5c2b5cf5627228da9bde0c8638cd742898b9304d143fe5621f94cf73a243585c" exitCode=0 Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.050469 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" event={"ID":"24d07265-6abd-44a7-83c5-112c01083143","Type":"ContainerDied","Data":"5c2b5cf5627228da9bde0c8638cd742898b9304d143fe5621f94cf73a243585c"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.050536 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" event={"ID":"24d07265-6abd-44a7-83c5-112c01083143","Type":"ContainerStarted","Data":"5e765d58c0de1d8c6a644c38e194737794a372aec0e009d53df75e960e02e30c"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.058523 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.070999 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-webhook-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.071153 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-apiservice-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.079388 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.090239 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.098905 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.118988 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.130199 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/456feb2b-91a3-42ae-aa03-accd55804c79-signing-key\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.136154 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.137244 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.637190462 +0000 UTC m=+142.013539183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.139535 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.149490 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/456feb2b-91a3-42ae-aa03-accd55804c79-signing-cabundle\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.159788 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.180299 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.198666 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.219544 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.225742 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.238854 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.238980 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.239129 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.73910898 +0000 UTC m=+142.115457691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.239827 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.240161 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.740147502 +0000 UTC m=+142.116496313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.250169 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74217d18-e17c-469b-a492-49b62f2f96c9-serving-cert\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.259775 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.278753 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.299597 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.305961 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74217d18-e17c-469b-a492-49b62f2f96c9-config\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.319341 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.340542 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.340704 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.340789 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.840767558 +0000 UTC m=+142.217116279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.342131 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.342985 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.842961138 +0000 UTC m=+142.219309889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.350948 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-cert\") pod \"ingress-canary-5fhjt\" (UID: \"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.358384 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.378754 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.399715 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.419529 4740 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.439314 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.443302 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.443453 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.94343293 +0000 UTC m=+142.319781651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.443777 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.444106 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.94409807 +0000 UTC m=+142.320446791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.460864 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.478949 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.498721 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.510577 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-node-bootstrap-token\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.518539 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.538431 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.543604 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-certs\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.544739 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.544906 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.044872091 +0000 UTC m=+142.421220812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.544975 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.545321 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.045284714 +0000 UTC m=+142.421633455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.545520 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-metrics-tls\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.559544 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.578564 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.633865 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6tnj\" (UniqueName: \"kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.642733 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-config-volume\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 
12:55:14.646130 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.646272 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.146251221 +0000 UTC m=+142.522599942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.646433 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.646752 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:15.146743797 +0000 UTC m=+142.523092508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.652958 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2jj4\" (UniqueName: \"kubernetes.io/projected/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-kube-api-access-h2jj4\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.672588 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2p7j\" (UniqueName: \"kubernetes.io/projected/28956c81-f1c4-471c-9564-5747a0a0aaf8-kube-api-access-t2p7j\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.713024 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lvfp\" (UniqueName: \"kubernetes.io/projected/643bf47c-570f-4204-adb1-512cd9e914b8-kube-api-access-8lvfp\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.721623 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fvs95\" (UniqueName: \"kubernetes.io/projected/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-kube-api-access-fvs95\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.736697 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.744592 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr6wd\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.747477 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.747611 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.24758858 +0000 UTC m=+142.623937301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.748041 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.748376 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.248362665 +0000 UTC m=+142.624711386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.753079 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82b2k\" (UniqueName: \"kubernetes.io/projected/fb14491a-6043-446a-8b10-626838253345-kube-api-access-82b2k\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.773078 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5w72\" (UniqueName: \"kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.788281 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.795986 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.796149 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl4tc\" (UniqueName: \"kubernetes.io/projected/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-kube-api-access-gl4tc\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.803272 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.815076 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sslp\" (UniqueName: \"kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.819893 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.836782 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.837437 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.888752 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.889365 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.889923 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.389903498 +0000 UTC m=+142.766252219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.890017 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.901164 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr2qz\" (UniqueName: \"kubernetes.io/projected/6f465ee4-90ff-4746-a90f-1e964b6c4d05-kube-api-access-qr2qz\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.915371 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fld5q\" (UniqueName: \"kubernetes.io/projected/2dc85ee1-e9d1-4d68-b953-30d83f8e7aef-kube-api-access-fld5q\") pod \"migrator-59844c95c7-272mp\" (UID: \"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.916102 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.920743 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980ab133-4d29-4d9e-b359-bf3cb06fbba3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.926524 4740 request.go:700] Waited for 1.000525883s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.926845 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.945425 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb4dx\" (UniqueName: \"kubernetes.io/projected/74217d18-e17c-469b-a492-49b62f2f96c9-kube-api-access-bb4dx\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.945540 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r5dd\" (UniqueName: \"kubernetes.io/projected/d5f0e5d1-897e-4200-8ea7-716faf71db56-kube-api-access-7r5dd\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.972482 4740 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.976566 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7393aab-0211-49f3-b683-3cf11cae93c6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.978176 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwdrk\" (UniqueName: \"kubernetes.io/projected/d24bd6df-1e79-4e8b-a71a-c3f07422af23-kube-api-access-hwdrk\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.992193 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.992565 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.492553419 +0000 UTC m=+142.868902140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:14.999731 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8ml9\" (UniqueName: \"kubernetes.io/projected/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-kube-api-access-d8ml9\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.011363 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.012226 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.021851 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.047346 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jcv2d"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.057140 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m9529" event={"ID":"91631c8c-d18f-44d6-9919-0b5fe8e8d45b","Type":"ContainerStarted","Data":"f63c862e8b008a3fc4588c15a9f797e9951ebed9f127723332e86bbc95d3ac25"} Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.059049 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-m9529" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.060555 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-m9529 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.060651 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m9529" podUID="91631c8c-d18f-44d6-9919-0b5fe8e8d45b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.060943 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.070145 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.075467 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s29x\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-kube-api-access-7s29x\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.075696 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psrt2\" (UniqueName: \"kubernetes.io/projected/d2b43cb6-05b8-4834-b187-1377370007fd-kube-api-access-psrt2\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.078208 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbrl8\" (UniqueName: \"kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.078925 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c975g\" (UniqueName: \"kubernetes.io/projected/1a760deb-c84d-4da0-a20b-dac7b17c24c7-kube-api-access-c975g\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.079423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-c86mj" event={"ID":"493225bc-7119-4eec-9314-aa63e475d061","Type":"ContainerStarted","Data":"da731b7a5bd768dea2648d59618949502bbbfdc567d9ff781bf5f89b61460664"} Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.080206 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.102409 4740 patch_prober.go:28] interesting pod/console-operator-58897d9998-c86mj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.102517 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-c86mj" podUID="493225bc-7119-4eec-9314-aa63e475d061" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.102615 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.102967 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.602948152 +0000 UTC m=+142.979296873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.104332 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.104540 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.604523812 +0000 UTC m=+142.980872533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.107043 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" event={"ID":"83add687-ddae-4960-8e05-c81bc891b8f0","Type":"ContainerStarted","Data":"9e7b4d8a5c91c86b676c069a6d02d853991d87d4312d1824c0c242276afab1b8"} Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.108373 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqjf7\" (UniqueName: \"kubernetes.io/projected/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-kube-api-access-gqjf7\") pod \"ingress-canary-5fhjt\" (UID: \"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.113393 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.124681 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" event={"ID":"24d07265-6abd-44a7-83c5-112c01083143","Type":"ContainerStarted","Data":"9d079a0caaf0498fc86a38d66c9cb00cea06bc8fc9b341cf38e0c1f762903f42"} Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.128548 4740 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tdlx8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.128602 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.129009 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59b2l\" (UniqueName: \"kubernetes.io/projected/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-kube-api-access-59b2l\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.146901 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tngkz\" (UniqueName: \"kubernetes.io/projected/ad3a4715-2249-418d-b03e-bd5aac43089e-kube-api-access-tngkz\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: 
\"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.148613 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nqbws"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.154779 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.177635 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2pvn\" (UniqueName: \"kubernetes.io/projected/2eef055f-7504-4f20-817e-afcd1bb6f996-kube-api-access-g2pvn\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.206438 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.207336 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.707305476 +0000 UTC m=+143.083654267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.213705 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.232654 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.232752 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtf4q\" (UniqueName: \"kubernetes.io/projected/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-kube-api-access-vtf4q\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.233847 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvm2f\" (UniqueName: \"kubernetes.io/projected/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-kube-api-access-zvm2f\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.238603 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.240328 4740 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.242713 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw2gw\" (UniqueName: \"kubernetes.io/projected/382ac2b0-b15a-412a-b8fb-e61844137cb1-kube-api-access-sw2gw\") pod \"dns-operator-744455d44c-5tlhr\" (UID: \"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.248157 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:15 crc kubenswrapper[4740]: W0216 12:55:15.253903 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b3c2258_4f58_414c_a893_c721b5ac9c03.slice/crio-fa1231c777ac869082e12b271c9de8f207a251381328df828b3f2a937306e447 WatchSource:0}: Error finding container fa1231c777ac869082e12b271c9de8f207a251381328df828b3f2a937306e447: Status 404 returned error can't find the container with id fa1231c777ac869082e12b271c9de8f207a251381328df828b3f2a937306e447 Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.267551 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.279644 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrhlc\" (UniqueName: \"kubernetes.io/projected/91338fe1-147f-41ff-9816-8cdcb7d1a08b-kube-api-access-zrhlc\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.280601 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k7wp\" (UniqueName: \"kubernetes.io/projected/456feb2b-91a3-42ae-aa03-accd55804c79-kube-api-access-8k7wp\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.281324 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:15 crc kubenswrapper[4740]: W0216 12:55:15.290022 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb14491a_6043_446a_8b10_626838253345.slice/crio-c81874774b2ee0e8238fef38e4c4e7b42c77ef5f86a5d2fbd6296d79429462c7 WatchSource:0}: Error finding container c81874774b2ee0e8238fef38e4c4e7b42c77ef5f86a5d2fbd6296d79429462c7: Status 404 returned error can't find the container with id c81874774b2ee0e8238fef38e4c4e7b42c77ef5f86a5d2fbd6296d79429462c7 Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.296853 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm2wm\" (UniqueName: \"kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.298522 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.313970 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.316310 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:15.816291716 +0000 UTC m=+143.192640437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.324108 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2f50997-a877-4d3f-9cf7-df6d254b48f5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.329464 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.336211 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.343468 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.344003 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.348065 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxng9\" (UniqueName: \"kubernetes.io/projected/8f4e839d-cd94-49e9-a386-e90820fceb5c-kube-api-access-sxng9\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.351900 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.357063 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnz4r\" (UniqueName: \"kubernetes.io/projected/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-kube-api-access-hnz4r\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.378321 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.383552 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.404143 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.415734 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.415893 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.915870429 +0000 UTC m=+143.292219150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.416553 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.417079 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.917066037 +0000 UTC m=+143.293414758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.422554 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.425132 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"] Feb 16 12:55:15 crc kubenswrapper[4740]: W0216 12:55:15.491510 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda31d7595_0ee7_48b5_9f1f_19907ed7c92b.slice/crio-f1ef8cd25203286dbdfc9bf2821b36ea7c337028ae47c74de1d507f1ce39bfdd WatchSource:0}: Error finding container f1ef8cd25203286dbdfc9bf2821b36ea7c337028ae47c74de1d507f1ce39bfdd: Status 404 returned error can't find the container with id f1ef8cd25203286dbdfc9bf2821b36ea7c337028ae47c74de1d507f1ce39bfdd Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.517697 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.518117 4740 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.018100246 +0000 UTC m=+143.394448967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.525442 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.537307 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.555734 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.577354 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.577414 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.579332 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.589837 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.619922 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.620342 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:16.120322712 +0000 UTC m=+143.496671433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.634218 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.723419 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s69f4"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.724299 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gctsd"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.729313 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.729963 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.229942532 +0000 UTC m=+143.606291263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: W0216 12:55:15.744466 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod980ab133_4d29_4d9e_b359_bf3cb06fbba3.slice/crio-cf9be9976faac3cb35ea4d86e6781bea2aa469c34c6fc8abedd04b20c4d5c422 WatchSource:0}: Error finding container cf9be9976faac3cb35ea4d86e6781bea2aa469c34c6fc8abedd04b20c4d5c422: Status 404 returned error can't find the container with id cf9be9976faac3cb35ea4d86e6781bea2aa469c34c6fc8abedd04b20c4d5c422 Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.764403 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.777679 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.796181 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.831553 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.831938 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.331924261 +0000 UTC m=+143.708272982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.898620 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.928397 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.933755 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.934865 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:16.43483558 +0000 UTC m=+143.811184301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.015750 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" podStartSLOduration=123.015730045 podStartE2EDuration="2m3.015730045s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:16.014415044 +0000 UTC m=+143.390763765" watchObservedRunningTime="2026-02-16 12:55:16.015730045 +0000 UTC m=+143.392078776" Feb 16 12:55:16 crc kubenswrapper[4740]: W0216 12:55:16.028064 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eef055f_7504_4f20_817e_afcd1bb6f996.slice/crio-0b7a93dfca8f26fe9e5bcd5451d038490ecf6dc08547a7e40f201c676ec90cbf WatchSource:0}: Error finding container 0b7a93dfca8f26fe9e5bcd5451d038490ecf6dc08547a7e40f201c676ec90cbf: Status 404 returned error can't find the container with id 0b7a93dfca8f26fe9e5bcd5451d038490ecf6dc08547a7e40f201c676ec90cbf Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.037564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.038521 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.538501442 +0000 UTC m=+143.914850163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: W0216 12:55:16.040552 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92938f98_5bd3_49e2_be2d_65b0fd5d0c12.slice/crio-180a17087e5768211228d0870db72849e32e0f0b64edb8e0c0bb27fc622985fd WatchSource:0}: Error finding container 180a17087e5768211228d0870db72849e32e0f0b64edb8e0c0bb27fc622985fd: Status 404 returned error can't find the container with id 180a17087e5768211228d0870db72849e32e0f0b64edb8e0c0bb27fc622985fd Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.117736 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" podStartSLOduration=123.117715555 podStartE2EDuration="2m3.117715555s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:16.063463457 +0000 UTC m=+143.439812168" watchObservedRunningTime="2026-02-16 12:55:16.117715555 +0000 UTC m=+143.494064276" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.136784 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.138299 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.139480 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.639462079 +0000 UTC m=+144.015810800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.178588 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5tlhr"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.189888 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q8lfc"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.241329 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.242680 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.742661876 +0000 UTC m=+144.119010597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.258340 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" event={"ID":"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca","Type":"ContainerStarted","Data":"bf0e44a27587bc7fb33843c6462760b3b44d3c507f8b13805f5414a1f648babe"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.266597 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" event={"ID":"3b3c2258-4f58-414c-a893-c721b5ac9c03","Type":"ContainerStarted","Data":"ba6d6a7ff15c9121a27bcfb14f6c962701892713ba24644d2ad95665183ec8f9"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.266656 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" event={"ID":"3b3c2258-4f58-414c-a893-c721b5ac9c03","Type":"ContainerStarted","Data":"fa1231c777ac869082e12b271c9de8f207a251381328df828b3f2a937306e447"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.268299 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.269962 4740 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wrjdd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.270016 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.277175 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.286232 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" event={"ID":"6f465ee4-90ff-4746-a90f-1e964b6c4d05","Type":"ContainerStarted","Data":"766f4abd04d39551a6838130123deb458c86dbe4df8e515db1ffed9b714c645d"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.301984 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.302147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" event={"ID":"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef","Type":"ContainerStarted","Data":"3fa93212d09678929fe979c3b81c37a4dea5659d5b87e6ecb28d6c5abf3f21af"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.314311 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" event={"ID":"74217d18-e17c-469b-a492-49b62f2f96c9","Type":"ContainerStarted","Data":"7d6e773965cc226dfbf4e70cb2da1ffd37af20003c9cd7fd5b91cacdc47ec476"} Feb 16 12:55:16 crc 
kubenswrapper[4740]: I0216 12:55:16.315805 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" event={"ID":"980ab133-4d29-4d9e-b359-bf3cb06fbba3","Type":"ContainerStarted","Data":"cf9be9976faac3cb35ea4d86e6781bea2aa469c34c6fc8abedd04b20c4d5c422"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.316651 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" event={"ID":"d5f0e5d1-897e-4200-8ea7-716faf71db56","Type":"ContainerStarted","Data":"6bc48c70d3f7ed96c98294a637f1bf5dd4a1152f3c164513dddb10226ae850ad"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.325524 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" event={"ID":"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc","Type":"ContainerStarted","Data":"32061023c32e4372677caf448a149aad546020eecb225c7e2dc4ef293d6c3732"} Feb 16 12:55:16 crc kubenswrapper[4740]: W0216 12:55:16.334321 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad3a4715_2249_418d_b03e_bd5aac43089e.slice/crio-c086828261692696cd9c3d38eddc09e53a1f01e8812dafd59d7b001dab3cd8ab WatchSource:0}: Error finding container c086828261692696cd9c3d38eddc09e53a1f01e8812dafd59d7b001dab3cd8ab: Status 404 returned error can't find the container with id c086828261692696cd9c3d38eddc09e53a1f01e8812dafd59d7b001dab3cd8ab Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.334371 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-42rhd" event={"ID":"91338fe1-147f-41ff-9816-8cdcb7d1a08b","Type":"ContainerStarted","Data":"8c57b07157a8e862e051683f72f8f5e2c2dc05d88b4b049dfe9b8e55427d031c"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.344501 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.346369 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.846346159 +0000 UTC m=+144.222694880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.347453 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.349574 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.353557 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" event={"ID":"92938f98-5bd3-49e2-be2d-65b0fd5d0c12","Type":"ContainerStarted","Data":"180a17087e5768211228d0870db72849e32e0f0b64edb8e0c0bb27fc622985fd"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.356787 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gctsd" event={"ID":"adc3a749-7453-4afe-ba48-f34188be4832","Type":"ContainerStarted","Data":"531ee6088e028abeb40db4014fff58f47925cdba0b3674ddf9755268d1aa83d4"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.374856 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" event={"ID":"a9a22462-173f-4075-927a-30493a5745d7","Type":"ContainerStarted","Data":"b1fdab80b8055470789558626b94e6fd689f065930bcfe2c60fd34eb94175732"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.387960 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" event={"ID":"2eef055f-7504-4f20-817e-afcd1bb6f996","Type":"ContainerStarted","Data":"0b7a93dfca8f26fe9e5bcd5451d038490ecf6dc08547a7e40f201c676ec90cbf"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.396296 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-njwjd" event={"ID":"d24bd6df-1e79-4e8b-a71a-c3f07422af23","Type":"ContainerStarted","Data":"ea721113a767da5df98377a8b7b1c9131ab6acd95e443433dc8ddf40c1ddcff7"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.399679 4740 csr.go:261] certificate signing request csr-hzktw is approved, waiting to be issued Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.407337 4740 csr.go:257] certificate signing request csr-hzktw is issued Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.408265 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" event={"ID":"fb14491a-6043-446a-8b10-626838253345","Type":"ContainerStarted","Data":"c81874774b2ee0e8238fef38e4c4e7b42c77ef5f86a5d2fbd6296d79429462c7"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.414347 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" event={"ID":"a31d7595-0ee7-48b5-9f1f-19907ed7c92b","Type":"ContainerStarted","Data":"f1ef8cd25203286dbdfc9bf2821b36ea7c337028ae47c74de1d507f1ce39bfdd"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.445342 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" event={"ID":"643bf47c-570f-4204-adb1-512cd9e914b8","Type":"ContainerStarted","Data":"5b08a917ae64f2aebd3511c9f67742e608bba4dfa7092c9189fe790a79fbf887"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.446355 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" event={"ID":"643bf47c-570f-4204-adb1-512cd9e914b8","Type":"ContainerStarted","Data":"f3be1ce6f3d9c7ed9896d86c3574154347e66b8924db78345f6a89c465870131"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.447795 4740 patch_prober.go:28] interesting pod/console-operator-58897d9998-c86mj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.447891 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-c86mj" podUID="493225bc-7119-4eec-9314-aa63e475d061" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.448760 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-m9529 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 16 12:55:16 crc 
kubenswrapper[4740]: I0216 12:55:16.448845 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m9529" podUID="91631c8c-d18f-44d6-9919-0b5fe8e8d45b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.450683 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.451063 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.951051394 +0000 UTC m=+144.327400105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.494991 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.557467 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.558100 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.058073511 +0000 UTC m=+144.434422222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.558228 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.561337 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.061305403 +0000 UTC m=+144.437654154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.656501 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5fhjt"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.661512 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.661670 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.16163349 +0000 UTC m=+144.537982211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.661759 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.662082 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.162070414 +0000 UTC m=+144.538419135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.788561 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.790406 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.29037622 +0000 UTC m=+144.666724941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.799483 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-m9529" podStartSLOduration=123.799466357 podStartE2EDuration="2m3.799466357s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:16.798751634 +0000 UTC m=+144.175100375" watchObservedRunningTime="2026-02-16 12:55:16.799466357 +0000 UTC m=+144.175815078" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.852645 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.882331 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-28sp5"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.901384 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-c86mj" podStartSLOduration=123.901270511 podStartE2EDuration="2m3.901270511s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:16.893327211 +0000 UTC m=+144.269675932" watchObservedRunningTime="2026-02-16 12:55:16.901270511 +0000 UTC m=+144.277619232" Feb 16 12:55:16 crc 
kubenswrapper[4740]: I0216 12:55:16.901664 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.902296 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.402278542 +0000 UTC m=+144.778627253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.911079 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.922758 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h95tf"] Feb 16 12:55:16 crc kubenswrapper[4740]: W0216 12:55:16.976163 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a2b33f9_514f_48f7_ae7f_23bb3ea0fab5.slice/crio-36fad69280f11328a40170099676861e33361cee30c5232d1efb9f1f28494210 WatchSource:0}: Error finding container 
36fad69280f11328a40170099676861e33361cee30c5232d1efb9f1f28494210: Status 404 returned error can't find the container with id 36fad69280f11328a40170099676861e33361cee30c5232d1efb9f1f28494210 Feb 16 12:55:17 crc kubenswrapper[4740]: W0216 12:55:17.002212 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f4e839d_cd94_49e9_a386_e90820fceb5c.slice/crio-b84e206f7aa5f18786105ba9a965066d36bd598845bcefc5eca4a60ffdcf596a WatchSource:0}: Error finding container b84e206f7aa5f18786105ba9a965066d36bd598845bcefc5eca4a60ffdcf596a: Status 404 returned error can't find the container with id b84e206f7aa5f18786105ba9a965066d36bd598845bcefc5eca4a60ffdcf596a Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.003035 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.003313 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.503299441 +0000 UTC m=+144.879648162 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.030209 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" podStartSLOduration=124.030190088 podStartE2EDuration="2m4.030190088s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.029716283 +0000 UTC m=+144.406065014" watchObservedRunningTime="2026-02-16 12:55:17.030190088 +0000 UTC m=+144.406538809" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.034308 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"] Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.042522 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj"] Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.083640 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" podStartSLOduration=124.083621179 podStartE2EDuration="2m4.083621179s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.078025923 +0000 UTC m=+144.454374644" watchObservedRunningTime="2026-02-16 
12:55:17.083621179 +0000 UTC m=+144.459969900" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.104181 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.104578 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.604559017 +0000 UTC m=+144.980907738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.106958 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2wzdf"] Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.206507 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.207362 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.707332901 +0000 UTC m=+145.083681692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.317714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.318196 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.81817909 +0000 UTC m=+145.194527821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.408901 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 12:50:16 +0000 UTC, rotation deadline is 2026-11-13 02:58:08.916460789 +0000 UTC Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.408947 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6470h2m51.507515503s for next certificate rotation Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.419455 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.419781 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.919764017 +0000 UTC m=+145.296112738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.422993 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" podStartSLOduration=123.422974827 podStartE2EDuration="2m3.422974827s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.38874197 +0000 UTC m=+144.765090691" watchObservedRunningTime="2026-02-16 12:55:17.422974827 +0000 UTC m=+144.799323548" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.480546 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" event={"ID":"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b","Type":"ContainerStarted","Data":"ecc39d12cb6ac857f193b234c0c65095915f019fb5a183124161212d668749a6"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.491560 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" event={"ID":"b2f50997-a877-4d3f-9cf7-df6d254b48f5","Type":"ContainerStarted","Data":"cb4ef268814cfefbb0b70b06d1836ad6b9484454163f78d47f7245335c7a53ca"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.516948 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" 
event={"ID":"6f465ee4-90ff-4746-a90f-1e964b6c4d05","Type":"ContainerStarted","Data":"f9f5b54a45b720edc8c5ca94b67395f345595fed5ad18ec157c5d42a1b3e65c5"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.517892 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.521728 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.526629 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" event={"ID":"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05","Type":"ContainerStarted","Data":"73810e418d9cf6f8bb87b8f398e00f1d17c609cfccb31bb5b8db9e59d5e96872"} Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.527019 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.02697442 +0000 UTC m=+145.403323141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.534544 4740 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-bl8xk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.534618 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" podUID="6f465ee4-90ff-4746-a90f-1e964b6c4d05" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.541692 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" event={"ID":"456feb2b-91a3-42ae-aa03-accd55804c79","Type":"ContainerStarted","Data":"6942960f514cd783c283b57928f166c7de162a4cc005908788a8ee2b12e3ec0a"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.550995 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" podStartSLOduration=124.550974936 podStartE2EDuration="2m4.550974936s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 12:55:17.550835011 +0000 UTC m=+144.927183732" watchObservedRunningTime="2026-02-16 12:55:17.550974936 +0000 UTC m=+144.927323657" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.551093 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" podStartSLOduration=123.551087769 podStartE2EDuration="2m3.551087769s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.41607418 +0000 UTC m=+144.792422901" watchObservedRunningTime="2026-02-16 12:55:17.551087769 +0000 UTC m=+144.927436490" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.554879 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-njwjd" event={"ID":"d24bd6df-1e79-4e8b-a71a-c3f07422af23","Type":"ContainerStarted","Data":"3aa996d0fd9eb2ec7be7e9cb35d869392afb480c606c3ee19bdabfb96cb7526a"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.570064 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" event={"ID":"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef","Type":"ContainerStarted","Data":"28ef1c4b8d37926ac8c9404b2dae03f9587cde7c7fb460d7b8a838368374d4ff"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.577291 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-njwjd" podStartSLOduration=5.577278013 podStartE2EDuration="5.577278013s" podCreationTimestamp="2026-02-16 12:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.576122737 +0000 UTC m=+144.952471458" watchObservedRunningTime="2026-02-16 12:55:17.577278013 +0000 
UTC m=+144.953626734" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.578520 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gctsd" event={"ID":"adc3a749-7453-4afe-ba48-f34188be4832","Type":"ContainerStarted","Data":"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.591619 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" event={"ID":"8f4e839d-cd94-49e9-a386-e90820fceb5c","Type":"ContainerStarted","Data":"b84e206f7aa5f18786105ba9a965066d36bd598845bcefc5eca4a60ffdcf596a"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.598358 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" event={"ID":"980ab133-4d29-4d9e-b359-bf3cb06fbba3","Type":"ContainerStarted","Data":"f6c34729ed10f79e25b77bea57a32569be8e5e2fcd7e619d50b2f8e4ba1408cf"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.623848 4740 generic.go:334] "Generic (PLEG): container finished" podID="fb14491a-6043-446a-8b10-626838253345" containerID="c6e65c2ba2debad5f6a920c907574927f8de52209e2bd01b05839d0731d3b342" exitCode=0 Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.623929 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" event={"ID":"fb14491a-6043-446a-8b10-626838253345","Type":"ContainerDied","Data":"c6e65c2ba2debad5f6a920c907574927f8de52209e2bd01b05839d0731d3b342"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.627147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5fhjt" event={"ID":"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5","Type":"ContainerStarted","Data":"36fad69280f11328a40170099676861e33361cee30c5232d1efb9f1f28494210"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 
12:55:17.629785 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" event={"ID":"a9a22462-173f-4075-927a-30493a5745d7","Type":"ContainerStarted","Data":"fbb95d26f626afc3bfc63396feee48e9a3723a0831baf7b1a599353c30ec5440"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.630203 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.637230 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.637998 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.137980113 +0000 UTC m=+145.514328834 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.645616 4740 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wknn7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.645679 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.656717 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" podStartSLOduration=124.656698572 podStartE2EDuration="2m4.656698572s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.656490446 +0000 UTC m=+145.032839177" watchObservedRunningTime="2026-02-16 12:55:17.656698572 +0000 UTC m=+145.033047293" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.660290 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-gctsd" 
podStartSLOduration=124.660275105 podStartE2EDuration="2m4.660275105s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.609749715 +0000 UTC m=+144.986098446" watchObservedRunningTime="2026-02-16 12:55:17.660275105 +0000 UTC m=+145.036623846" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.684581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" event={"ID":"74217d18-e17c-469b-a492-49b62f2f96c9","Type":"ContainerStarted","Data":"8d3f6be83326cb04efddae2a209cc0b4c7ea1232724a6b3473d0c6a5bf6bbacb"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.686003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" event={"ID":"a7393aab-0211-49f3-b683-3cf11cae93c6","Type":"ContainerStarted","Data":"769f1c896eee2f74da4cb78d42864c5f8c6eb5b19197b846ba5885094e3969a9"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.694235 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" event={"ID":"92938f98-5bd3-49e2-be2d-65b0fd5d0c12","Type":"ContainerStarted","Data":"100d57dc7ecd48fee78fab77cd32f563620ce9818c4928895fcf1804af8c43c7"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.706901 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" podStartSLOduration=124.706884231 podStartE2EDuration="2m4.706884231s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.687478551 +0000 UTC m=+145.063827272" watchObservedRunningTime="2026-02-16 
12:55:17.706884231 +0000 UTC m=+145.083232952" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.712480 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" event={"ID":"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc","Type":"ContainerStarted","Data":"b2bb3d6dc9008d4ab841be4341d24cf81e469f03626b812f600039e4181e2b18"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.715563 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" event={"ID":"d2b43cb6-05b8-4834-b187-1377370007fd","Type":"ContainerStarted","Data":"b41252e4259e9f761e0c3f7095d52fcec89ac59940ff32d24b9efc59b9a7ba67"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.717744 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" event={"ID":"1a760deb-c84d-4da0-a20b-dac7b17c24c7","Type":"ContainerStarted","Data":"cfcbdb0ab4462d08f780e84f9512575a935e405f5c083bc85a6432c96f6148f9"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.720057 4740 generic.go:334] "Generic (PLEG): container finished" podID="a31d7595-0ee7-48b5-9f1f-19907ed7c92b" containerID="d52bd0521c30dceb0141fcf325544230c0b46a352b31c82060df558ce08f1b73" exitCode=0 Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.720189 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" event={"ID":"a31d7595-0ee7-48b5-9f1f-19907ed7c92b","Type":"ContainerDied","Data":"d52bd0521c30dceb0141fcf325544230c0b46a352b31c82060df558ce08f1b73"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.733472 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" 
event={"ID":"e89bf85c-b3ee-4ac6-a904-346cba8dbb49","Type":"ContainerStarted","Data":"68a11b47359f227cff050b9d48012ed1cc90c8a0cad7c3648191cfff5c56bad8"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.738842 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.745175 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.245159045 +0000 UTC m=+145.621507856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.745324 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" event={"ID":"9062ffdd-baa5-4ebc-8f40-353fac0e821e","Type":"ContainerStarted","Data":"065139da6c6a631eff569a054a8bb03bcd168cb04767b6d3a09ccc0de2e57e23"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.749484 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-28sp5" 
event={"ID":"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e","Type":"ContainerStarted","Data":"44043d9a540cbbc81a62d66d948f30b96c27ced178ccc7286417a0c9b3be3ac9"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.751496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" event={"ID":"28956c81-f1c4-471c-9564-5747a0a0aaf8","Type":"ContainerStarted","Data":"2580205133e84b6dfdbdc6e20eb3dd15d33dce11e68b539ed9dc6b95cafc2946"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.756076 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" event={"ID":"ad3a4715-2249-418d-b03e-bd5aac43089e","Type":"ContainerStarted","Data":"c086828261692696cd9c3d38eddc09e53a1f01e8812dafd59d7b001dab3cd8ab"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.760336 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" event={"ID":"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca","Type":"ContainerStarted","Data":"cee82cebfd010a21105291ca298b4bb9794db404715ef60484f7ca1333eccef6"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.763171 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" event={"ID":"ad6fcf0b-0176-4920-93de-563a8f4af054","Type":"ContainerStarted","Data":"2be139f079fcc5649706872bbbbd65a294d2ecbb09b125ad1cf822b03bf022a4"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.767780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" event={"ID":"382ac2b0-b15a-412a-b8fb-e61844137cb1","Type":"ContainerStarted","Data":"84c079b8365359cc583de9fa06c17cb129640acde6ffce26f6304b0ce3abc59b"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.774246 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" event={"ID":"643bf47c-570f-4204-adb1-512cd9e914b8","Type":"ContainerStarted","Data":"87eccfacfa87633e077da2cdf279089b724f7a765c9e6627c4f0c23a551ed4ed"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.774622 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-m9529 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.774661 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m9529" podUID="91631c8c-d18f-44d6-9919-0b5fe8e8d45b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.778703 4740 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wrjdd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.778775 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.785832 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.788514 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" podStartSLOduration=123.78850293 podStartE2EDuration="2m3.78850293s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.786481296 +0000 UTC m=+145.162830017" watchObservedRunningTime="2026-02-16 12:55:17.78850293 +0000 UTC m=+145.164851651" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.841756 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" podStartSLOduration=124.841717064 podStartE2EDuration="2m4.841717064s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.837277834 +0000 UTC m=+145.213626565" watchObservedRunningTime="2026-02-16 12:55:17.841717064 +0000 UTC m=+145.218065785" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.849063 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.853611 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.353559677 +0000 UTC m=+145.729908588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.865003 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" podStartSLOduration=124.864983057 podStartE2EDuration="2m4.864983057s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.86190737 +0000 UTC m=+145.238256111" watchObservedRunningTime="2026-02-16 12:55:17.864983057 +0000 UTC m=+145.241331778" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.953322 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.953825 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.453787511 +0000 UTC m=+145.830136222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.055029 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.055265 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.555226393 +0000 UTC m=+145.931575114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.055391 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.055735 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.555721698 +0000 UTC m=+145.932070419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.109564 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.109619 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.115942 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.156275 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.156471 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.656441917 +0000 UTC m=+146.032790628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.156667 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.157051 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.657035286 +0000 UTC m=+146.033384007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.207594 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.257473 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.258126 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.758109807 +0000 UTC m=+146.134458528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.589139 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.589473 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.089459482 +0000 UTC m=+146.465808203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.689769 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.690088 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.190067648 +0000 UTC m=+146.566416369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.790731 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.791268 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.291249092 +0000 UTC m=+146.667597893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.841227 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" event={"ID":"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc","Type":"ContainerStarted","Data":"d8fb40d38a679c14d8c7a0d1c7f7d5145f7d8310cc80aba4b42885fea852ceaf"} Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.859136 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" event={"ID":"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef","Type":"ContainerStarted","Data":"3dfb524b29fc7fff49015d3059e0907a6380cf021a014abd770928395271829e"} Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.895747 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.896951 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.396932898 +0000 UTC m=+146.773281619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.900704 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" podStartSLOduration=125.900680106 podStartE2EDuration="2m5.900680106s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:18.894902985 +0000 UTC m=+146.271251706" watchObservedRunningTime="2026-02-16 12:55:18.900680106 +0000 UTC m=+146.277028827" Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.920710 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" event={"ID":"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05","Type":"ContainerStarted","Data":"be068403ca2985a0086f52006a6f6ae30b0ce04827de4f4d9b576d48df977318"} Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.014117 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.014466 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.514448176 +0000 UTC m=+146.890796897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.067462 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-42rhd" event={"ID":"91338fe1-147f-41ff-9816-8cdcb7d1a08b","Type":"ContainerStarted","Data":"51cbd64af26c6cd3b28733e82546044f0bb307d276befb74ce466be5112e64cc"} Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.179059 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.179635 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.679602004 +0000 UTC m=+147.055950775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.193467 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" event={"ID":"d5f0e5d1-897e-4200-8ea7-716faf71db56","Type":"ContainerStarted","Data":"b8bf5a7d868dd4c7b60b1cecd7bdc8eae541071a50ff7921610cae0fcb85aa38"} Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.193683 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.218484 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-42rhd" podStartSLOduration=126.218461076 podStartE2EDuration="2m6.218461076s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:19.208192653 +0000 UTC m=+146.584541374" watchObservedRunningTime="2026-02-16 12:55:19.218461076 +0000 UTC m=+146.594809807" Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.220953 4740 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5cgnk container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.221000 
4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" podUID="d5f0e5d1-897e-4200-8ea7-716faf71db56" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.225164 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" event={"ID":"a7393aab-0211-49f3-b683-3cf11cae93c6","Type":"ContainerStarted","Data":"369f4df8a7332b82ba92f0d199f15846efbfce63af5960b3bd23043ea709ed0d"} Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.271996 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" event={"ID":"ad3a4715-2249-418d-b03e-bd5aac43089e","Type":"ContainerStarted","Data":"eb97e6818b8a59913c88c8610bf13abf0558b69530ee76868996e1d9ac11ad88"} Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.274249 4740 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-bl8xk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.274283 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" podUID="6f465ee4-90ff-4746-a90f-1e964b6c4d05" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.284207 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.284488 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" podStartSLOduration=126.284474693 podStartE2EDuration="2m6.284474693s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:19.239933082 +0000 UTC m=+146.616281803" watchObservedRunningTime="2026-02-16 12:55:19.284474693 +0000 UTC m=+146.660823414" Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.295850 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.296651 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" podStartSLOduration=126.296631626 podStartE2EDuration="2m6.296631626s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:19.284404711 +0000 UTC m=+146.660753442" watchObservedRunningTime="2026-02-16 12:55:19.296631626 +0000 UTC m=+146.672980347" Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.297487 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:19.797464122 +0000 UTC m=+147.173812843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.402973 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.403154 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.903118877 +0000 UTC m=+147.279467598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.403431 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.403753 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.903739507 +0000 UTC m=+147.280088228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.504435 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.504649 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.004622301 +0000 UTC m=+147.380971042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.504778 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.505115 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.005108197 +0000 UTC m=+147.381456918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.556180 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.557893 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.557939 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.607463 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.607636 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.107607382 +0000 UTC m=+147.483956113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.607824 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.608199 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.10819009 +0000 UTC m=+147.484538811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.708582 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.708783 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.208765485 +0000 UTC m=+147.585114206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.708901 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.709207 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.209183408 +0000 UTC m=+147.585532129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.809849 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.810002 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.30997901 +0000 UTC m=+147.686327731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.810123 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.810530 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.310514637 +0000 UTC m=+147.686863358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.911830 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.912035 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.41200064 +0000 UTC m=+147.788349361 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.912235 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.912643 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.41262216 +0000 UTC m=+147.788970881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.013037 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.013245 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.513224475 +0000 UTC m=+147.889573196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.013353 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.013746 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.513729341 +0000 UTC m=+147.890078062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.114653 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.114834 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.614791192 +0000 UTC m=+147.991139913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.114936 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.115274 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.615265437 +0000 UTC m=+147.991614218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.216193 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.216397 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.716375528 +0000 UTC m=+148.092724259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.276238 4740 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wknn7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.276296 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.278774 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" event={"ID":"382ac2b0-b15a-412a-b8fb-e61844137cb1","Type":"ContainerStarted","Data":"0770376af748569dc44c83607198ba831306735fa97014ab28fe87fe9f35ff8f"} Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.280607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" event={"ID":"2eef055f-7504-4f20-817e-afcd1bb6f996","Type":"ContainerStarted","Data":"5590859155e4a72f7b794ac007f2f81df6d73462563833e5fa89a2166df0d5ea"} Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.282692 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" event={"ID":"28956c81-f1c4-471c-9564-5747a0a0aaf8","Type":"ContainerStarted","Data":"9f28ea3efe4f9888833c94d49676ec60f141bf112df88fcd72e429b982fb05a5"} Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.284443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" event={"ID":"d2b43cb6-05b8-4834-b187-1377370007fd","Type":"ContainerStarted","Data":"5d31c8fa3186311f29b5a5a2d93f4740823beba4efaf99b7c56d10e0e6aec73e"} Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.285712 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" event={"ID":"9062ffdd-baa5-4ebc-8f40-353fac0e821e","Type":"ContainerStarted","Data":"907512b6409722d30c16b894b8c9741e98ad5cd769eb1a4db429190f1ce78cae"} Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.287165 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" event={"ID":"ad6fcf0b-0176-4920-93de-563a8f4af054","Type":"ContainerStarted","Data":"c256af9b499807e274a6149a715efcda6dd0650f10d5231582d5cdf2cc5f554a"} Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.288733 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" event={"ID":"1a760deb-c84d-4da0-a20b-dac7b17c24c7","Type":"ContainerStarted","Data":"c681901f30ae852bdbf12db537784e62b67ddfc527ee5933a9855cd7631ef4a6"} Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.289775 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" 
event={"ID":"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc","Type":"ContainerStarted","Data":"cd3b3910d4d1170d56096d29ee738829002b152c4cfd46783e7362c2f80ca7f2"} Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.290827 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" event={"ID":"8f4e839d-cd94-49e9-a386-e90820fceb5c","Type":"ContainerStarted","Data":"f509a9cafab782b7f4d2f8571a59e797cdca574f195426eccfaf8781c9c46ad1"} Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.292010 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5fhjt" event={"ID":"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5","Type":"ContainerStarted","Data":"9a28ca22f31a0524bc58a608850510c8e85d2d1fd1c1bdaab7bf275de915fc85"} Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.292853 4740 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5cgnk container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.292892 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" podUID="d5f0e5d1-897e-4200-8ea7-716faf71db56" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.292950 4740 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-bl8xk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 
12:55:20.293005 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" podUID="6f465ee4-90ff-4746-a90f-1e964b6c4d05" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.314544 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" podStartSLOduration=127.314525937 podStartE2EDuration="2m7.314525937s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:20.302424786 +0000 UTC m=+147.678773547" watchObservedRunningTime="2026-02-16 12:55:20.314525937 +0000 UTC m=+147.690874668" Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.318918 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.320835 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.820795373 +0000 UTC m=+148.197144215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.332612 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5fhjt" podStartSLOduration=8.332592335 podStartE2EDuration="8.332592335s" podCreationTimestamp="2026-02-16 12:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:20.315135775 +0000 UTC m=+147.691484496" watchObservedRunningTime="2026-02-16 12:55:20.332592335 +0000 UTC m=+147.708941056" Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.350188 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" podStartSLOduration=126.350166368 podStartE2EDuration="2m6.350166368s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:20.335029031 +0000 UTC m=+147.711377752" watchObservedRunningTime="2026-02-16 12:55:20.350166368 +0000 UTC m=+147.726515089" Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.350461 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" podStartSLOduration=127.350457537 podStartE2EDuration="2m7.350457537s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:20.346647147 +0000 UTC m=+147.722995868" watchObservedRunningTime="2026-02-16 12:55:20.350457537 +0000 UTC m=+147.726806258" Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.422996 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.423231 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.923197836 +0000 UTC m=+148.299546567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.423339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.423672 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.923658291 +0000 UTC m=+148.300007022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.523989 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.524197 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.024156883 +0000 UTC m=+148.400505604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.524592 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.524907 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.024894406 +0000 UTC m=+148.401243127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.557530 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.557623 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.625463 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.625616 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.125587984 +0000 UTC m=+148.501936715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.625710 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.626095 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.12608325 +0000 UTC m=+148.502431991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.726169 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.726503 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.226487639 +0000 UTC m=+148.602836360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.827617 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.828248 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.32819939 +0000 UTC m=+148.704548151 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.928901 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.929208 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.429194148 +0000 UTC m=+148.805542869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.029718 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.030165 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.530150905 +0000 UTC m=+148.906499646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.131097 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.131293 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.631271467 +0000 UTC m=+149.007620188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.131391 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.131731 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.631718582 +0000 UTC m=+149.008067313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.232741 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.233011 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.732981017 +0000 UTC m=+149.109329748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.297547 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" event={"ID":"456feb2b-91a3-42ae-aa03-accd55804c79","Type":"ContainerStarted","Data":"d631035ed32d16c29ba6b60f63d2571696f4507a64bd2022685e105cba31324e"} Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.299175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" event={"ID":"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b","Type":"ContainerStarted","Data":"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf"} Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.299642 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.301624 4740 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n92bx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.301693 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" podUID="bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.320383 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" podStartSLOduration=128.320365177 podStartE2EDuration="2m8.320365177s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:21.320232843 +0000 UTC m=+148.696581584" watchObservedRunningTime="2026-02-16 12:55:21.320365177 +0000 UTC m=+148.696713898" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.334522 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.334607 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.334673 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.335167 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.835151523 +0000 UTC m=+149.211500244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.340447 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.340517 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.436360 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.436549 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.936512523 +0000 UTC m=+149.312861244 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.436597 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.436649 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 
12:55:21.436719 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.437050 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.937037859 +0000 UTC m=+149.313386580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.438678 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.442495 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.537185 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.537386 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.037360256 +0000 UTC m=+149.413708977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.537449 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.537738 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.037726778 +0000 UTC m=+149.414075499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.557918 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.557983 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.601221 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.609650 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.618234 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.638668 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.638976 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.1388968 +0000 UTC m=+149.515245581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.639226 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.639616 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.139598413 +0000 UTC m=+149.515947134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.741214 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.741578 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.241544051 +0000 UTC m=+149.617892772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.742046 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.742441 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.242428189 +0000 UTC m=+149.618776910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.842679 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.842956 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.342916531 +0000 UTC m=+149.719265252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.843039 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.843310 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.343298082 +0000 UTC m=+149.719646803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: W0216 12:55:21.892666 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-cc56be54d9521ce488d80c743d09d9503a8888a571428980849610fe586f376a WatchSource:0}: Error finding container cc56be54d9521ce488d80c743d09d9503a8888a571428980849610fe586f376a: Status 404 returned error can't find the container with id cc56be54d9521ce488d80c743d09d9503a8888a571428980849610fe586f376a Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.944151 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.944373 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.444319291 +0000 UTC m=+149.820668012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.944503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.944863 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.444846448 +0000 UTC m=+149.821195179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.044881 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.045280 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.545264008 +0000 UTC m=+149.921612719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.146012 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.146408 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.64639162 +0000 UTC m=+150.022740341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.247369 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.248900 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.747953826 +0000 UTC m=+150.124302547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.307107 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-28sp5" event={"ID":"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e","Type":"ContainerStarted","Data":"ba16b7ac64d657254c6e5487def1cf620c63ec9030f4923398d92fdf686609b2"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.308394 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3b394bd1c25334140785d24738c9ec1573944f38a755ba7184b1706ca94c7583"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.310503 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" event={"ID":"8f4e839d-cd94-49e9-a386-e90820fceb5c","Type":"ContainerStarted","Data":"f6a136832741c73fca43a66086c3e6e1bcc2dae9db6224dd52bba1ff367bb413"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.314801 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" event={"ID":"1a760deb-c84d-4da0-a20b-dac7b17c24c7","Type":"ContainerStarted","Data":"e6cbeb6553c8b7d149e2a67552985a59c4e9dfcf44d3f750ced02d45877f1995"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.325410 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" 
event={"ID":"ad6fcf0b-0176-4920-93de-563a8f4af054","Type":"ContainerStarted","Data":"368b2d548f2abb6b56e7c310da90fc0e58dbfe8f69c244cfd0f7890620d5e9de"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.330437 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" event={"ID":"382ac2b0-b15a-412a-b8fb-e61844137cb1","Type":"ContainerStarted","Data":"b03798c0c5efa6a3a7b1ecb3eeba2cb565c15f381a550123fb8ac114e2645b85"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.332247 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cc56be54d9521ce488d80c743d09d9503a8888a571428980849610fe586f376a"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.338626 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"93c8024145c4e34e2dad8468d0651498513108aefbb9f3c19ebff516278ee557"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.340305 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" podStartSLOduration=129.340289032 podStartE2EDuration="2m9.340289032s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.329970017 +0000 UTC m=+149.706318758" watchObservedRunningTime="2026-02-16 12:55:22.340289032 +0000 UTC m=+149.716637753" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.344263 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" 
event={"ID":"ad3a4715-2249-418d-b03e-bd5aac43089e","Type":"ContainerStarted","Data":"598c6514a53180b8bda5b700535a63d070b723b5e95339dbb596c4cf3e4048f0"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.349139 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.350366 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" podStartSLOduration=129.350347118 podStartE2EDuration="2m9.350347118s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.347347924 +0000 UTC m=+149.723696645" watchObservedRunningTime="2026-02-16 12:55:22.350347118 +0000 UTC m=+149.726695839" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.350869 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.850851864 +0000 UTC m=+150.227200635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.357423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" event={"ID":"a31d7595-0ee7-48b5-9f1f-19907ed7c92b","Type":"ContainerStarted","Data":"1364cab4bea3b40917f030c07855f94fd0c250ea5ac801484a545e6e2e82dd41"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.358191 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.360150 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" event={"ID":"b2f50997-a877-4d3f-9cf7-df6d254b48f5","Type":"ContainerStarted","Data":"90443c3caf5da9529e8f128ee1bc74a1d9926fa339f2d7cada2a5ce3a2212d7d"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.362887 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" event={"ID":"28956c81-f1c4-471c-9564-5747a0a0aaf8","Type":"ContainerStarted","Data":"1c165460650228d07ce223c82716ef0f846094a0efda8dd9a566d265e70b51fb"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.388964 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" 
event={"ID":"fb14491a-6043-446a-8b10-626838253345","Type":"ContainerStarted","Data":"32c7495fbb3a8bc10cf5476cff3cb9407e5d2225803d07701acc111c5fea3dfd"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.390096 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" podStartSLOduration=129.390076318 podStartE2EDuration="2m9.390076318s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.373633331 +0000 UTC m=+149.749982052" watchObservedRunningTime="2026-02-16 12:55:22.390076318 +0000 UTC m=+149.766425039" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.392517 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" podStartSLOduration=129.392499394 podStartE2EDuration="2m9.392499394s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.388899882 +0000 UTC m=+149.765248613" watchObservedRunningTime="2026-02-16 12:55:22.392499394 +0000 UTC m=+149.768848115" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.411023 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" event={"ID":"92938f98-5bd3-49e2-be2d-65b0fd5d0c12","Type":"ContainerStarted","Data":"a0b23a96a96a2bcc1b147595563e360326c0e4208e6606154b0e6327c9a84579"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.411307 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" podStartSLOduration=129.411294347 podStartE2EDuration="2m9.411294347s" 
podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.411160572 +0000 UTC m=+149.787509303" watchObservedRunningTime="2026-02-16 12:55:22.411294347 +0000 UTC m=+149.787643068" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.411509 4740 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n92bx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.411552 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" podUID="bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.437962 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" podStartSLOduration=129.437942965 podStartE2EDuration="2m9.437942965s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.43586635 +0000 UTC m=+149.812215071" watchObservedRunningTime="2026-02-16 12:55:22.437942965 +0000 UTC m=+149.814291696" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.450234 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.450542 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.950514411 +0000 UTC m=+150.326863142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.450666 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.452286 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.952276236 +0000 UTC m=+150.328624957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.461544 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" podStartSLOduration=129.461528837 podStartE2EDuration="2m9.461528837s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.459795352 +0000 UTC m=+149.836144063" watchObservedRunningTime="2026-02-16 12:55:22.461528837 +0000 UTC m=+149.837877558" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.484551 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" podStartSLOduration=129.4845316 podStartE2EDuration="2m9.4845316s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.482544188 +0000 UTC m=+149.858892909" watchObservedRunningTime="2026-02-16 12:55:22.4845316 +0000 UTC m=+149.860880321" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.551429 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.551731 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.051717734 +0000 UTC m=+150.428066455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.560170 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.560236 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.591708 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" podStartSLOduration=128.591689763 podStartE2EDuration="2m8.591689763s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.591052592 +0000 UTC m=+149.967401323" watchObservedRunningTime="2026-02-16 12:55:22.591689763 +0000 UTC m=+149.968038494" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.594477 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" podStartSLOduration=129.59445964 podStartE2EDuration="2m9.59445964s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.55092382 +0000 UTC m=+149.927272541" watchObservedRunningTime="2026-02-16 12:55:22.59445964 +0000 UTC m=+149.970808371" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.646012 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" podStartSLOduration=129.645986271 podStartE2EDuration="2m9.645986271s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.639192187 +0000 UTC m=+150.015540918" watchObservedRunningTime="2026-02-16 12:55:22.645986271 +0000 UTC m=+150.022334992" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.674029 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.674445 4740 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.174431617 +0000 UTC m=+150.550780338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.692339 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" podStartSLOduration=129.692314269 podStartE2EDuration="2m9.692314269s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.674782477 +0000 UTC m=+150.051131198" watchObservedRunningTime="2026-02-16 12:55:22.692314269 +0000 UTC m=+150.068662990" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.707063 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" podStartSLOduration=129.707043603 podStartE2EDuration="2m9.707043603s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.70473585 +0000 UTC m=+150.081084581" watchObservedRunningTime="2026-02-16 12:55:22.707043603 +0000 UTC m=+150.083392324" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 
12:55:22.776318 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.776409 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.276379654 +0000 UTC m=+150.652728375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.776844 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.777367 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:23.277350615 +0000 UTC m=+150.653699336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.878047 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.878186 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.378159507 +0000 UTC m=+150.754508228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.878319 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.878623 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.378611802 +0000 UTC m=+150.754960523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.979551 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.980014 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.479995341 +0000 UTC m=+150.856344062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.080902 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.081376 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.581357001 +0000 UTC m=+150.957705722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.182439 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.182626 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.682595697 +0000 UTC m=+151.058944418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.182752 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.183094 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.683084792 +0000 UTC m=+151.059433513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.283623 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.283799 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.783776771 +0000 UTC m=+151.160125492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.284211 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.284532 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.784521875 +0000 UTC m=+151.160870596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.314491 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.384794 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.384933 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.884904043 +0000 UTC m=+151.261252764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.385180 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.385610 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.885599745 +0000 UTC m=+151.261948516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.388410 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-m9529 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.388468 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m9529" podUID="91631c8c-d18f-44d6-9919-0b5fe8e8d45b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.389025 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-m9529 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.389088 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-m9529" podUID="91631c8c-d18f-44d6-9919-0b5fe8e8d45b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.460718 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-h95tf" event={"ID":"e89bf85c-b3ee-4ac6-a904-346cba8dbb49","Type":"ContainerStarted","Data":"ab14e6f8c8ce35738b50a2944ef3524e3a9a5c7fca335b701a0d336d0a066778"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.463120 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-28sp5" event={"ID":"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e","Type":"ContainerStarted","Data":"8afc2c22702047e8d86b6d846960e6f89991e6b27189b2eb9d166331a9c09da9"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.463860 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.475060 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2ec09428dfb3b7935771595455f5922604b9bbf1999e6e4251c26536adbabf35"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.486418 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.487577 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.987550393 +0000 UTC m=+151.363899114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.488246 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9ccdf7ef0ceff951fcead1563ceb99ab36bb27ca0443dd99de7c03b02e42f980"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.493608 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0893981b224d46d2d6220c3481bba88b7b55849a1f366208fe26b48b38a15a3f"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.494310 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.501177 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-28sp5" podStartSLOduration=11.501159031 podStartE2EDuration="11.501159031s" podCreationTimestamp="2026-02-16 12:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:23.497519027 +0000 UTC m=+150.873867758" watchObservedRunningTime="2026-02-16 12:55:23.501159031 +0000 UTC m=+150.877507752" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.504934 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" event={"ID":"fb14491a-6043-446a-8b10-626838253345","Type":"ContainerStarted","Data":"03a30ba0344dc0500c6ab61eadacc3ebfef5c5d5f676b6d4f1bbf47773b0cc10"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.506951 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.567132 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:23 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:23 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:23 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.567202 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.588518 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.591000 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:24.090982788 +0000 UTC m=+151.467331569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.690842 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.691098 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.191075057 +0000 UTC m=+151.567423778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.721317 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" podStartSLOduration=130.721288408 podStartE2EDuration="2m10.721288408s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:23.721095142 +0000 UTC m=+151.097443863" watchObservedRunningTime="2026-02-16 12:55:23.721288408 +0000 UTC m=+151.097637139" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.793935 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.794685 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.294669787 +0000 UTC m=+151.671018508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.895415 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.895565 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.395542451 +0000 UTC m=+151.771891182 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.895639 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.895945 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.395936164 +0000 UTC m=+151.772284885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.996880 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.997005 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.496981523 +0000 UTC m=+151.873330244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.997261 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.997594 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.497583722 +0000 UTC m=+151.873932443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.098835 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.099184 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.599165939 +0000 UTC m=+151.975514660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.200033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.200534 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.700495767 +0000 UTC m=+152.076844498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.301652 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.301871 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.801840066 +0000 UTC m=+152.178188797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.302017 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.302401 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.802380763 +0000 UTC m=+152.178729484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.403164 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.403433 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.903407533 +0000 UTC m=+152.279756254 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.403792 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.404157 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.904142716 +0000 UTC m=+152.280491437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.427109 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-22crz"] Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.427986 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.432203 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.446548 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-22crz"] Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.504768 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.504933 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 
crc kubenswrapper[4740]: I0216 12:55:24.504967 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.505029 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrxb\" (UniqueName: \"kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.505125 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.005110493 +0000 UTC m=+152.381459214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.523269 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" event={"ID":"e89bf85c-b3ee-4ac6-a904-346cba8dbb49","Type":"ContainerStarted","Data":"0e332639c4d13b2001f46e6085d2af79175e59c327a5d639b854fce6d41db658"}
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.526929 4740 generic.go:334] "Generic (PLEG): container finished" podID="9062ffdd-baa5-4ebc-8f40-353fac0e821e" containerID="907512b6409722d30c16b894b8c9741e98ad5cd769eb1a4db429190f1ce78cae" exitCode=0
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.527320 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" event={"ID":"9062ffdd-baa5-4ebc-8f40-353fac0e821e","Type":"ContainerDied","Data":"907512b6409722d30c16b894b8c9741e98ad5cd769eb1a4db429190f1ce78cae"}
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.574346 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 12:55:24 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld
Feb 16 12:55:24 crc kubenswrapper[4740]: [+]process-running ok
Feb 16 12:55:24 crc kubenswrapper[4740]: healthz check failed
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.574400 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.596880 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-smtc5"]
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.598024 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.601888 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.607696 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.607877 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.607905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrxb\" (UniqueName: \"kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.607986 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.608857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.609212 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.109195278 +0000 UTC m=+152.485544009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.610408 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.625350 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smtc5"]
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.662583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrxb\" (UniqueName: \"kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.712527 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.712783 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.212761698 +0000 UTC m=+152.589110429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.712921 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.712963 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk44j\" (UniqueName: \"kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.712985 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.713047 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.713342 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.213331885 +0000 UTC m=+152.589680606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.788248 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.797085 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"]
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.798034 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.801573 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.812658 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"]
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.813010 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.813049 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.814722 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.814918 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.814954 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk44j\" (UniqueName: \"kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.814972 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.815431 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.815498 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.31548221 +0000 UTC m=+152.691830931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.815693 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.820083 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.879939 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk44j\" (UniqueName: \"kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.891233 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-gctsd"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.891286 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-gctsd"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.902038 4740 patch_prober.go:28] interesting pod/console-f9d7485db-gctsd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.902097 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gctsd" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.917070 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.918239 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.918328 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb4r9\" (UniqueName: \"kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.918372 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.918398 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.920418 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.420406801 +0000 UTC m=+152.796755522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.004279 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z48zk"]
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.017741 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.025462 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.025665 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb4r9\" (UniqueName: \"kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.025689 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.025709 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.027346 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.527330396 +0000 UTC m=+152.903679117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.028971 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.032042 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.035027 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z48zk"]
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.049469 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.058615 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.079514 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb4r9\" (UniqueName: \"kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.133836 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.134645 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.135482 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frh8h\" (UniqueName: \"kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.135581 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.135631 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.135706 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.136744 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.136971 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.137281 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.637270445 +0000 UTC m=+153.013619166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.148804 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.165208 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236520 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236773 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236835 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236884 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frh8h\" (UniqueName: \"kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236927 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236964 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.237055 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.737039995 +0000 UTC m=+153.113388716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.237786 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.238236 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.276641 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frh8h\" (UniqueName: \"kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.330269 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.337992 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.338051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.338107 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.338393 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.838382124 +0000 UTC m=+153.214730845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.338574 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.340111 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.354283 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.397883 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.399100 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.439549 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.439696 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.939674251 +0000 UTC m=+153.316022972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.440088 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.466061 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.966045991 +0000 UTC m=+153.342394712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.476763 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.494153 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-22crz"]
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.542474 4740 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.542689 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" event={"ID":"e89bf85c-b3ee-4ac6-a904-346cba8dbb49","Type":"ContainerStarted","Data":"c13655b7aa4b36fa8580b7aa0221de76bb11751e6e0640802f813e9192709d55"}
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.547338 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.558963 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:26.058936784 +0000 UTC m=+153.435285515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.559051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.559408 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:26.059398949 +0000 UTC m=+153.435747670 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.559582 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.564008 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:25 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:25 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:25 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.564050 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.573013 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smtc5"] Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.594914 4740 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T12:55:25.542721424Z","Handler":null,"Name":""} Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 
12:55:25.599506 4740 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.599568 4740 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.669439 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.700858 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.770914 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.773764 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.774154 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.827126 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.865683 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"] Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.890355 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.972401 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm2wm\" (UniqueName: \"kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm\") pod \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.972483 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume\") pod \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.972517 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume\") pod \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.974029 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume" (OuterVolumeSpecName: "config-volume") pod "9062ffdd-baa5-4ebc-8f40-353fac0e821e" (UID: "9062ffdd-baa5-4ebc-8f40-353fac0e821e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.989518 4740 patch_prober.go:28] interesting pod/apiserver-76f77b778f-nqbws container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]log ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]etcd ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 16 12:55:25 crc kubenswrapper[4740]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 16 12:55:25 crc kubenswrapper[4740]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 12:55:25 crc kubenswrapper[4740]: livez check failed Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.989588 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" podUID="fb14491a-6043-446a-8b10-626838253345" containerName="openshift-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.991360 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm" (OuterVolumeSpecName: "kube-api-access-pm2wm") pod "9062ffdd-baa5-4ebc-8f40-353fac0e821e" (UID: "9062ffdd-baa5-4ebc-8f40-353fac0e821e"). InnerVolumeSpecName "kube-api-access-pm2wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.991936 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9062ffdd-baa5-4ebc-8f40-353fac0e821e" (UID: "9062ffdd-baa5-4ebc-8f40-353fac0e821e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.995401 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:25.999622 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.045500 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z48zk"] Feb 16 12:55:26 crc kubenswrapper[4740]: W0216 12:55:26.064053 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod17e62feb_2b96_41e9_9060_492217efc502.slice/crio-88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac WatchSource:0}: Error finding container 88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac: Status 404 returned error can't find the container with id 88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.075085 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm2wm\" (UniqueName: \"kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.075149 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.075161 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.214976 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"] Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 
12:55:26.389447 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"] Feb 16 12:55:26 crc kubenswrapper[4740]: E0216 12:55:26.389986 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9062ffdd-baa5-4ebc-8f40-353fac0e821e" containerName="collect-profiles" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.389998 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9062ffdd-baa5-4ebc-8f40-353fac0e821e" containerName="collect-profiles" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.390126 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9062ffdd-baa5-4ebc-8f40-353fac0e821e" containerName="collect-profiles" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.390852 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.392547 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.398898 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"] Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.480098 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.480151 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content\") pod \"redhat-marketplace-tbqv5\" (UID: 
\"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.480496 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9s2l\" (UniqueName: \"kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.554033 4740 generic.go:334] "Generic (PLEG): container finished" podID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerID="1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88" exitCode=0 Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.554131 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerDied","Data":"1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.554192 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerStarted","Data":"a9c9a2711d899c1e5a260796e95114ebf5c80382d12c8808c0846487a96c8aa1"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.556074 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.556106 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerDied","Data":"681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.556061 4740 generic.go:334] 
"Generic (PLEG): container finished" podID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerID="681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf" exitCode=0 Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.556149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerStarted","Data":"e2704b65ce01fba3c60e03244a825b4b8122c50b215c9372a0b6818fde2a82aa"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.557619 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" event={"ID":"56fbd3c7-a514-479c-9b0f-1cdb3025cae6","Type":"ContainerStarted","Data":"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.557651 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" event={"ID":"56fbd3c7-a514-479c-9b0f-1cdb3025cae6","Type":"ContainerStarted","Data":"1104556d5cde5c0aa4a407502225880f615d1c9eedcf19e3ada6ce6e63d3b266"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.557773 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.560622 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:26 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:26 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:26 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.560666 4740 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.560733 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" event={"ID":"e89bf85c-b3ee-4ac6-a904-346cba8dbb49","Type":"ContainerStarted","Data":"5351383c72d73b024fe229e7406f0eb4fe7f7cafdebc643e104e79f1b492532a"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.562204 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" event={"ID":"9062ffdd-baa5-4ebc-8f40-353fac0e821e","Type":"ContainerDied","Data":"065139da6c6a631eff569a054a8bb03bcd168cb04767b6d3a09ccc0de2e57e23"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.562221 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.562243 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="065139da6c6a631eff569a054a8bb03bcd168cb04767b6d3a09ccc0de2e57e23" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.563654 4740 generic.go:334] "Generic (PLEG): container finished" podID="44198116-006f-4be3-ad53-3d32576dd681" containerID="2d50e15e7dfab2ba0d8e36c47eedb3a59a16e3076615834b16679a8be2cde520" exitCode=0 Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.563716 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerDied","Data":"2d50e15e7dfab2ba0d8e36c47eedb3a59a16e3076615834b16679a8be2cde520"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.563749 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerStarted","Data":"e130a9aace627f73e9efde47dbcd50406ac735047566ac4275095c2434589e89"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.565454 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"17e62feb-2b96-41e9-9060-492217efc502","Type":"ContainerStarted","Data":"e6477eced407d1d3ed07a0244a19704e287cf3d8859fe73a5389dfb5507f363f"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.565497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"17e62feb-2b96-41e9-9060-492217efc502","Type":"ContainerStarted","Data":"88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.569499 4740 generic.go:334] "Generic (PLEG): container finished" podID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerID="c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106" exitCode=0 Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.569531 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerDied","Data":"c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.569565 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerStarted","Data":"30342fb7e4ac42c29ddfbf6e245edd8370e6082c882f3ddc8fc68fa25e67ec8b"} Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.582035 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9s2l\" (UniqueName: 
\"kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.582224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.582320 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.582726 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.582788 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.611042 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9s2l\" (UniqueName: 
\"kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.680522 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" podStartSLOduration=14.680503037 podStartE2EDuration="14.680503037s" podCreationTimestamp="2026-02-16 12:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:26.675935813 +0000 UTC m=+154.052284544" watchObservedRunningTime="2026-02-16 12:55:26.680503037 +0000 UTC m=+154.056851758" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.713536 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.713518266 podStartE2EDuration="1.713518266s" podCreationTimestamp="2026-02-16 12:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:26.710455499 +0000 UTC m=+154.086804220" watchObservedRunningTime="2026-02-16 12:55:26.713518266 +0000 UTC m=+154.089866987" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.739695 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" podStartSLOduration=133.739674108 podStartE2EDuration="2m13.739674108s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:26.738367087 +0000 UTC m=+154.114715808" watchObservedRunningTime="2026-02-16 12:55:26.739674108 +0000 UTC m=+154.116022829" Feb 16 12:55:26 crc 
kubenswrapper[4740]: I0216 12:55:26.775757 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.818198 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"] Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.824061 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.837301 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"] Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.886034 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.886130 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.886164 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnrg\" (UniqueName: \"kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 
12:55:26.903156 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.987702 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.987799 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.987884 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjnrg\" (UniqueName: \"kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.988581 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.989087 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content\") pod 
\"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.011693 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjnrg\" (UniqueName: \"kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.145827 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.277510 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"] Feb 16 12:55:27 crc kubenswrapper[4740]: W0216 12:55:27.288601 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fd80862_652c_4fa2_a591_44a3cc76379d.slice/crio-f4f0e4c138876a419d8305e7e6a9bc95934cbb191f330681343c30e1610937eb WatchSource:0}: Error finding container f4f0e4c138876a419d8305e7e6a9bc95934cbb191f330681343c30e1610937eb: Status 404 returned error can't find the container with id f4f0e4c138876a419d8305e7e6a9bc95934cbb191f330681343c30e1610937eb Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.294432 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.337026 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"] Feb 16 12:55:27 crc kubenswrapper[4740]: W0216 12:55:27.358475 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf80b641a_1e2b_4db3_9298_08042171a404.slice/crio-0b376e259a09836cfaec84762b20e19cdb8174f29dd03942db440dec05590e0e WatchSource:0}: Error finding container 0b376e259a09836cfaec84762b20e19cdb8174f29dd03942db440dec05590e0e: Status 404 returned error can't find the container with id 0b376e259a09836cfaec84762b20e19cdb8174f29dd03942db440dec05590e0e Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.561068 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:27 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:27 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:27 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.561442 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.578669 4740 generic.go:334] "Generic (PLEG): container finished" podID="f80b641a-1e2b-4db3-9298-08042171a404" containerID="a95746ac673c65b743bb2c6ae6349ae88b4476464a895083995fbb1946f5d59e" exitCode=0 Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.578930 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerDied","Data":"a95746ac673c65b743bb2c6ae6349ae88b4476464a895083995fbb1946f5d59e"} Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.579007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" 
event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerStarted","Data":"0b376e259a09836cfaec84762b20e19cdb8174f29dd03942db440dec05590e0e"} Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.584527 4740 generic.go:334] "Generic (PLEG): container finished" podID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerID="fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8" exitCode=0 Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.584995 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerDied","Data":"fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8"} Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.585023 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerStarted","Data":"f4f0e4c138876a419d8305e7e6a9bc95934cbb191f330681343c30e1610937eb"} Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.592460 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"] Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.594732 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.596946 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.601669 4740 generic.go:334] "Generic (PLEG): container finished" podID="17e62feb-2b96-41e9-9060-492217efc502" containerID="e6477eced407d1d3ed07a0244a19704e287cf3d8859fe73a5389dfb5507f363f" exitCode=0 Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.601776 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"17e62feb-2b96-41e9-9060-492217efc502","Type":"ContainerDied","Data":"e6477eced407d1d3ed07a0244a19704e287cf3d8859fe73a5389dfb5507f363f"} Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.608295 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"] Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.696137 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.696217 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.696313 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-89vb4\" (UniqueName: \"kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.797676 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89vb4\" (UniqueName: \"kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.797751 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.797805 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.798323 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.798910 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.818507 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89vb4\" (UniqueName: \"kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.924729 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.991682 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"] Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.994149 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.997467 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"] Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.101929 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.101997 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m9jh\" (UniqueName: \"kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.102225 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.203934 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m9jh\" (UniqueName: \"kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.204325 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.204368 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.205112 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.205147 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.232611 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m9jh\" (UniqueName: \"kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.334782 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.446692 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"] Feb 16 12:55:28 crc kubenswrapper[4740]: W0216 12:55:28.484340 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb4cf07f_4486_4ff8_88d3_b04296a09ece.slice/crio-bd639f85fe34dad1670a6a91b1a5a271314aa8af3eb87c2254c3dba3da066707 WatchSource:0}: Error finding container bd639f85fe34dad1670a6a91b1a5a271314aa8af3eb87c2254c3dba3da066707: Status 404 returned error can't find the container with id bd639f85fe34dad1670a6a91b1a5a271314aa8af3eb87c2254c3dba3da066707 Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.560579 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:28 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:28 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:28 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.560627 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.596945 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"] Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.622100 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" 
event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerStarted","Data":"bd639f85fe34dad1670a6a91b1a5a271314aa8af3eb87c2254c3dba3da066707"} Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.951856 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.036508 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir\") pod \"17e62feb-2b96-41e9-9060-492217efc502\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.036633 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "17e62feb-2b96-41e9-9060-492217efc502" (UID: "17e62feb-2b96-41e9-9060-492217efc502"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.036637 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access\") pod \"17e62feb-2b96-41e9-9060-492217efc502\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.037006 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.042227 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "17e62feb-2b96-41e9-9060-492217efc502" (UID: "17e62feb-2b96-41e9-9060-492217efc502"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.137838 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.559086 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:29 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:29 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:29 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.559158 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.636804 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"17e62feb-2b96-41e9-9060-492217efc502","Type":"ContainerDied","Data":"88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac"} Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.636866 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.636914 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.642795 4740 generic.go:334] "Generic (PLEG): container finished" podID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerID="9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d" exitCode=0 Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.642867 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerDied","Data":"9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d"} Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.653201 4740 generic.go:334] "Generic (PLEG): container finished" podID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerID="2a749969623029aaccbdf27a9810d459d9c5039d65880ed9d91f3a4574878a8a" exitCode=0 Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.653234 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerDied","Data":"2a749969623029aaccbdf27a9810d459d9c5039d65880ed9d91f3a4574878a8a"} Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.653259 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerStarted","Data":"ce33be103aa47d28e69db79295eb5459d1dc46ee55c5e4d98d8d9854797067ed"} Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.808513 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.814026 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.558221 
4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:30 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:30 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:30 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.558289 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.854912 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 12:55:30 crc kubenswrapper[4740]: E0216 12:55:30.855159 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e62feb-2b96-41e9-9060-492217efc502" containerName="pruner" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.855174 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e62feb-2b96-41e9-9060-492217efc502" containerName="pruner" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.855298 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e62feb-2b96-41e9-9060-492217efc502" containerName="pruner" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.868145 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.868237 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.873206 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.873433 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.969632 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.970082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.071222 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.071297 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.071730 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.110191 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.184025 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.567520 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:31 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:31 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:31 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.567581 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.705507 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 12:55:32 crc kubenswrapper[4740]: I0216 12:55:32.558879 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:32 crc kubenswrapper[4740]: [+]has-synced ok Feb 16 12:55:32 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:32 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:32 crc kubenswrapper[4740]: I0216 12:55:32.558944 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:33 crc kubenswrapper[4740]: I0216 12:55:33.393364 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-m9529" Feb 16 12:55:33 crc kubenswrapper[4740]: I0216 12:55:33.450610 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:33 crc kubenswrapper[4740]: I0216 12:55:33.578995 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:33 crc kubenswrapper[4740]: I0216 12:55:33.592339 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:34 crc kubenswrapper[4740]: I0216 12:55:34.891595 4740 patch_prober.go:28] interesting pod/console-f9d7485db-gctsd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 16 12:55:34 crc kubenswrapper[4740]: I0216 
12:55:34.891924 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gctsd" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 16 12:55:35 crc kubenswrapper[4740]: I0216 12:55:35.771920 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:35 crc kubenswrapper[4740]: I0216 12:55:35.784917 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:35 crc kubenswrapper[4740]: I0216 12:55:35.800417 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:44 crc kubenswrapper[4740]: I0216 12:55:44.880985 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95462a9-2f88-47a0-b230-2f824b38a575","Type":"ContainerStarted","Data":"249fdb9b89f9d4b810e4d5f438067b210d3881b760dc1d050f49a1fde20196f8"} Feb 16 12:55:44 crc kubenswrapper[4740]: I0216 12:55:44.895750 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:44 crc kubenswrapper[4740]: I0216 12:55:44.899682 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:45 crc kubenswrapper[4740]: I0216 12:55:45.575494 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:55:45 crc kubenswrapper[4740]: I0216 12:55:45.575857 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:55:46 crc kubenswrapper[4740]: I0216 12:55:46.001649 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:55 crc kubenswrapper[4740]: I0216 12:55:55.068115 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:58 crc kubenswrapper[4740]: E0216 12:55:58.415571 
4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 12:55:58 crc kubenswrapper[4740]: E0216 12:55:58.416563 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7m9jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wwk79_openshift-marketplace(fa69bf39-1ed0-42ba-91f9-c401e7fb9337): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 12:55:58 crc kubenswrapper[4740]: E0216 12:55:58.417884 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wwk79" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.183569 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wwk79" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.289089 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.289439 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfrxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-22crz_openshift-marketplace(70e65531-7cfb-415d-a0a7-25288c2cd5c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.290658 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-22crz" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" Feb 16 12:56:00 crc 
kubenswrapper[4740]: E0216 12:56:00.306394 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.306527 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xb4r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-hzpc4_openshift-marketplace(44198116-006f-4be3-ad53-3d32576dd681): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.307632 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hzpc4" podUID="44198116-006f-4be3-ad53-3d32576dd681" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.358034 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.358199 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frh8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-z48zk_openshift-marketplace(e9545e2f-e72f-4944-bc7a-ed9b052a34b0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.359569 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-z48zk" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" Feb 16 12:56:00 crc 
kubenswrapper[4740]: I0216 12:56:00.613505 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tcfzx"] Feb 16 12:56:00 crc kubenswrapper[4740]: W0216 12:56:00.777974 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12044a18_c0cd_4ce6_a1f8_45e3c10095fb.slice/crio-bd9d88360005295a97932eeb33970cf20a8ffed4d32bf77e8df1bf26f84859e5 WatchSource:0}: Error finding container bd9d88360005295a97932eeb33970cf20a8ffed4d32bf77e8df1bf26f84859e5: Status 404 returned error can't find the container with id bd9d88360005295a97932eeb33970cf20a8ffed4d32bf77e8df1bf26f84859e5 Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.963261 4740 generic.go:334] "Generic (PLEG): container finished" podID="f80b641a-1e2b-4db3-9298-08042171a404" containerID="16e02cae8966336d5b6d2314925a616b32a6590e28fe0b840dfe710d1fb15fab" exitCode=0 Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.963302 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerDied","Data":"16e02cae8966336d5b6d2314925a616b32a6590e28fe0b840dfe710d1fb15fab"} Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.967053 4740 generic.go:334] "Generic (PLEG): container finished" podID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerID="03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc" exitCode=0 Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.967094 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerDied","Data":"03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc"} Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.975341 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerID="acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976" exitCode=0 Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.975417 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerDied","Data":"acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976"} Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.986232 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95462a9-2f88-47a0-b230-2f824b38a575","Type":"ContainerStarted","Data":"2a7f51befb95e02874f85a5b08ab65f653237e20090a1b1673557a9896ac9f2f"} Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.995274 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerStarted","Data":"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae"} Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.998549 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" event={"ID":"12044a18-c0cd-4ce6-a1f8-45e3c10095fb","Type":"ContainerStarted","Data":"bd9d88360005295a97932eeb33970cf20a8ffed4d32bf77e8df1bf26f84859e5"} Feb 16 12:56:01 crc kubenswrapper[4740]: E0216 12:56:01.000495 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hzpc4" podUID="44198116-006f-4be3-ad53-3d32576dd681" Feb 16 12:56:01 crc kubenswrapper[4740]: E0216 12:56:01.001413 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-22crz" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" Feb 16 12:56:01 crc kubenswrapper[4740]: E0216 12:56:01.013515 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-z48zk" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" Feb 16 12:56:01 crc kubenswrapper[4740]: I0216 12:56:01.077546 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=31.077510058 podStartE2EDuration="31.077510058s" podCreationTimestamp="2026-02-16 12:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:01.077457296 +0000 UTC m=+188.453806027" watchObservedRunningTime="2026-02-16 12:56:01.077510058 +0000 UTC m=+188.453858789" Feb 16 12:56:01 crc kubenswrapper[4740]: I0216 12:56:01.606086 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.011542 4740 generic.go:334] "Generic (PLEG): container finished" podID="d95462a9-2f88-47a0-b230-2f824b38a575" containerID="2a7f51befb95e02874f85a5b08ab65f653237e20090a1b1673557a9896ac9f2f" exitCode=0 Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.011972 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95462a9-2f88-47a0-b230-2f824b38a575","Type":"ContainerDied","Data":"2a7f51befb95e02874f85a5b08ab65f653237e20090a1b1673557a9896ac9f2f"} Feb 16 12:56:02 crc kubenswrapper[4740]: 
I0216 12:56:02.015092 4740 generic.go:334] "Generic (PLEG): container finished" podID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerID="6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae" exitCode=0 Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.015151 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerDied","Data":"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae"} Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.023727 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" event={"ID":"12044a18-c0cd-4ce6-a1f8-45e3c10095fb","Type":"ContainerStarted","Data":"af8b0fbc0f7d859e0a6e078815876aa26fd134fecbc94e0361122422e921af21"} Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.023787 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" event={"ID":"12044a18-c0cd-4ce6-a1f8-45e3c10095fb","Type":"ContainerStarted","Data":"a244c39bf1f06ab492def06fe6a192fa4c5a20c6bf0a7fadf0a33a59cf7132ce"} Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.049142 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tcfzx" podStartSLOduration=169.049119461 podStartE2EDuration="2m49.049119461s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:02.04462443 +0000 UTC m=+189.420973151" watchObservedRunningTime="2026-02-16 12:56:02.049119461 +0000 UTC m=+189.425468183" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.031003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" 
event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerStarted","Data":"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2"} Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.033270 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerStarted","Data":"b41c83a6cada398da2a369d1028d362c634c4a1c17432a3088895c384269d048"} Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.035518 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerStarted","Data":"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b"} Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.038311 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerStarted","Data":"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483"} Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.052366 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lrlzg" podStartSLOduration=3.2838232339999998 podStartE2EDuration="36.052354001s" podCreationTimestamp="2026-02-16 12:55:27 +0000 UTC" firstStartedPulling="2026-02-16 12:55:29.644397613 +0000 UTC m=+157.020746334" lastFinishedPulling="2026-02-16 12:56:02.41292838 +0000 UTC m=+189.789277101" observedRunningTime="2026-02-16 12:56:03.049888964 +0000 UTC m=+190.426237685" watchObservedRunningTime="2026-02-16 12:56:03.052354001 +0000 UTC m=+190.428702722" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.068876 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-smtc5" podStartSLOduration=3.560533809 
podStartE2EDuration="39.06885653s" podCreationTimestamp="2026-02-16 12:55:24 +0000 UTC" firstStartedPulling="2026-02-16 12:55:26.570676421 +0000 UTC m=+153.947025142" lastFinishedPulling="2026-02-16 12:56:02.078999142 +0000 UTC m=+189.455347863" observedRunningTime="2026-02-16 12:56:03.067523858 +0000 UTC m=+190.443872589" watchObservedRunningTime="2026-02-16 12:56:03.06885653 +0000 UTC m=+190.445205251" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.091021 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gcgnl" podStartSLOduration=2.8033915780000003 podStartE2EDuration="37.091001197s" podCreationTimestamp="2026-02-16 12:55:26 +0000 UTC" firstStartedPulling="2026-02-16 12:55:27.580871549 +0000 UTC m=+154.957220270" lastFinishedPulling="2026-02-16 12:56:01.868481178 +0000 UTC m=+189.244829889" observedRunningTime="2026-02-16 12:56:03.088588561 +0000 UTC m=+190.464937282" watchObservedRunningTime="2026-02-16 12:56:03.091001197 +0000 UTC m=+190.467349918" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.113929 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tbqv5" podStartSLOduration=2.751740623 podStartE2EDuration="37.113910548s" podCreationTimestamp="2026-02-16 12:55:26 +0000 UTC" firstStartedPulling="2026-02-16 12:55:27.593010161 +0000 UTC m=+154.969358882" lastFinishedPulling="2026-02-16 12:56:01.955180096 +0000 UTC m=+189.331528807" observedRunningTime="2026-02-16 12:56:03.11365096 +0000 UTC m=+190.489999681" watchObservedRunningTime="2026-02-16 12:56:03.113910548 +0000 UTC m=+190.490259269" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.395343 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.424161 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir\") pod \"d95462a9-2f88-47a0-b230-2f824b38a575\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.424215 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d95462a9-2f88-47a0-b230-2f824b38a575" (UID: "d95462a9-2f88-47a0-b230-2f824b38a575"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.424366 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access\") pod \"d95462a9-2f88-47a0-b230-2f824b38a575\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.424994 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.446836 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d95462a9-2f88-47a0-b230-2f824b38a575" (UID: "d95462a9-2f88-47a0-b230-2f824b38a575"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.526741 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.686277 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"] Feb 16 12:56:04 crc kubenswrapper[4740]: I0216 12:56:04.045400 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:56:04 crc kubenswrapper[4740]: I0216 12:56:04.045322 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95462a9-2f88-47a0-b230-2f824b38a575","Type":"ContainerDied","Data":"249fdb9b89f9d4b810e4d5f438067b210d3881b760dc1d050f49a1fde20196f8"} Feb 16 12:56:04 crc kubenswrapper[4740]: I0216 12:56:04.045493 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="249fdb9b89f9d4b810e4d5f438067b210d3881b760dc1d050f49a1fde20196f8" Feb 16 12:56:04 crc kubenswrapper[4740]: I0216 12:56:04.918757 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:56:04 crc kubenswrapper[4740]: I0216 12:56:04.919120 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:56:05 crc kubenswrapper[4740]: I0216 12:56:05.046021 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:56:06 crc kubenswrapper[4740]: I0216 12:56:06.777312 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:56:06 crc kubenswrapper[4740]: I0216 12:56:06.777371 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:56:06 crc kubenswrapper[4740]: I0216 12:56:06.828011 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.096712 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.147441 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.147512 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.198186 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.925685 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.925757 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:56:08 crc kubenswrapper[4740]: I0216 12:56:08.103757 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:08 crc kubenswrapper[4740]: I0216 12:56:08.959216 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lrlzg" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="registry-server" 
probeResult="failure" output=< Feb 16 12:56:08 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 12:56:08 crc kubenswrapper[4740]: > Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.194740 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"] Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.258153 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 12:56:09 crc kubenswrapper[4740]: E0216 12:56:09.258389 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95462a9-2f88-47a0-b230-2f824b38a575" containerName="pruner" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.258416 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95462a9-2f88-47a0-b230-2f824b38a575" containerName="pruner" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.258542 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95462a9-2f88-47a0-b230-2f824b38a575" containerName="pruner" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.258988 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.261515 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.261591 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.306790 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.306856 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.314727 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.408074 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.408132 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.408658 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.432805 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.576281 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.972018 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 12:56:10 crc kubenswrapper[4740]: I0216 12:56:10.079654 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71598100-ab8f-489f-9a0a-d5396867ddc2","Type":"ContainerStarted","Data":"1320e21be3a37032e7f7ddbdf737319e299f99317830fe9cd578e0614d523d5d"} Feb 16 12:56:10 crc kubenswrapper[4740]: I0216 12:56:10.079869 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gcgnl" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="registry-server" containerID="cri-o://b41c83a6cada398da2a369d1028d362c634c4a1c17432a3088895c384269d048" gracePeriod=2 Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.293845 4740 generic.go:334] "Generic (PLEG): container finished" podID="f80b641a-1e2b-4db3-9298-08042171a404" containerID="b41c83a6cada398da2a369d1028d362c634c4a1c17432a3088895c384269d048" exitCode=0 Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.293905 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerDied","Data":"b41c83a6cada398da2a369d1028d362c634c4a1c17432a3088895c384269d048"} Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.294256 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.294610 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerDied","Data":"0b376e259a09836cfaec84762b20e19cdb8174f29dd03942db440dec05590e0e"} Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.294645 4740 scope.go:117] "RemoveContainer" containerID="b41c83a6cada398da2a369d1028d362c634c4a1c17432a3088895c384269d048" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.302994 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71598100-ab8f-489f-9a0a-d5396867ddc2","Type":"ContainerStarted","Data":"39c58584f44a332a5d4cc984fd318aa7650172a5c525be1d1bd0913c0c6b6760"} Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.322288 4740 scope.go:117] "RemoveContainer" containerID="16e02cae8966336d5b6d2314925a616b32a6590e28fe0b840dfe710d1fb15fab" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.348089 4740 scope.go:117] "RemoveContainer" containerID="a95746ac673c65b743bb2c6ae6349ae88b4476464a895083995fbb1946f5d59e" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.351233 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.351217803 podStartE2EDuration="4.351217803s" podCreationTimestamp="2026-02-16 12:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:13.351102739 +0000 UTC m=+200.727451480" watchObservedRunningTime="2026-02-16 12:56:13.351217803 +0000 UTC m=+200.727566524" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.488087 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities\") pod \"f80b641a-1e2b-4db3-9298-08042171a404\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.488229 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjnrg\" (UniqueName: \"kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg\") pod \"f80b641a-1e2b-4db3-9298-08042171a404\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.489184 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities" (OuterVolumeSpecName: "utilities") pod "f80b641a-1e2b-4db3-9298-08042171a404" (UID: "f80b641a-1e2b-4db3-9298-08042171a404"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.489278 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content\") pod \"f80b641a-1e2b-4db3-9298-08042171a404\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.489779 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.493528 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg" (OuterVolumeSpecName: "kube-api-access-xjnrg") pod "f80b641a-1e2b-4db3-9298-08042171a404" (UID: "f80b641a-1e2b-4db3-9298-08042171a404"). InnerVolumeSpecName "kube-api-access-xjnrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.590652 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjnrg\" (UniqueName: \"kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.099760 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f80b641a-1e2b-4db3-9298-08042171a404" (UID: "f80b641a-1e2b-4db3-9298-08042171a404"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.196230 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.310050 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.311512 4740 generic.go:334] "Generic (PLEG): container finished" podID="71598100-ab8f-489f-9a0a-d5396867ddc2" containerID="39c58584f44a332a5d4cc984fd318aa7650172a5c525be1d1bd0913c0c6b6760" exitCode=0 Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.311561 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71598100-ab8f-489f-9a0a-d5396867ddc2","Type":"ContainerDied","Data":"39c58584f44a332a5d4cc984fd318aa7650172a5c525be1d1bd0913c0c6b6760"} Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.346330 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"] Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.349376 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"] Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.962169 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.054247 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 12:56:15 crc kubenswrapper[4740]: E0216 12:56:15.054511 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="extract-content" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.054528 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="extract-content" Feb 16 12:56:15 crc kubenswrapper[4740]: E0216 12:56:15.054549 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="extract-utilities" Feb 16 12:56:15 crc kubenswrapper[4740]: 
I0216 12:56:15.054558 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="extract-utilities" Feb 16 12:56:15 crc kubenswrapper[4740]: E0216 12:56:15.054577 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="registry-server" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.054585 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="registry-server" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.054702 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="registry-server" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.055181 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.065878 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.208938 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.209261 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.209517 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.288761 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f80b641a-1e2b-4db3-9298-08042171a404" path="/var/lib/kubelet/pods/f80b641a-1e2b-4db3-9298-08042171a404/volumes" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.310585 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.310732 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.310758 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.310851 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.310851 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.317457 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerStarted","Data":"512a8521c4f3d6ef45799cb122a17152d8669df458582ef9c533c39abd849ac8"} Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.349470 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.379270 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.555787 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.574953 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.575036 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.575090 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.576045 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.576166 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b" gracePeriod=600 Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.613869 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir\") pod \"71598100-ab8f-489f-9a0a-d5396867ddc2\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.613950 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access\") pod \"71598100-ab8f-489f-9a0a-d5396867ddc2\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.614095 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "71598100-ab8f-489f-9a0a-d5396867ddc2" (UID: "71598100-ab8f-489f-9a0a-d5396867ddc2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.614197 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.618601 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "71598100-ab8f-489f-9a0a-d5396867ddc2" (UID: "71598100-ab8f-489f-9a0a-d5396867ddc2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.714651 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.777674 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 12:56:15 crc kubenswrapper[4740]: W0216 12:56:15.790179 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod64be474a_1d70_42d2_aa8b_977624363891.slice/crio-67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6 WatchSource:0}: Error finding container 67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6: Status 404 returned error can't find the container with id 67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6 Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.324758 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b" exitCode=0 Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.324866 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b"} Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.324899 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4"} Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.327614 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"64be474a-1d70-42d2-aa8b-977624363891","Type":"ContainerStarted","Data":"1ab679fc04f940cb37eaeb115a58b3e3c81632adb21deae450b25d776eebad8d"} Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.327655 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"64be474a-1d70-42d2-aa8b-977624363891","Type":"ContainerStarted","Data":"67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6"} Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.329727 4740 generic.go:334] "Generic (PLEG): container finished" podID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerID="671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc" exitCode=0 Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.329766 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerDied","Data":"671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc"} Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.332944 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71598100-ab8f-489f-9a0a-d5396867ddc2","Type":"ContainerDied","Data":"1320e21be3a37032e7f7ddbdf737319e299f99317830fe9cd578e0614d523d5d"} Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.332973 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1320e21be3a37032e7f7ddbdf737319e299f99317830fe9cd578e0614d523d5d" Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.332999 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.335388 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerStarted","Data":"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7"} Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.338235 4740 generic.go:334] "Generic (PLEG): container finished" podID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerID="512a8521c4f3d6ef45799cb122a17152d8669df458582ef9c533c39abd849ac8" exitCode=0 Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.338278 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerDied","Data":"512a8521c4f3d6ef45799cb122a17152d8669df458582ef9c533c39abd849ac8"} Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.392936 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.39292148 podStartE2EDuration="1.39292148s" podCreationTimestamp="2026-02-16 12:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:16.389545282 +0000 UTC m=+203.765894003" watchObservedRunningTime="2026-02-16 12:56:16.39292148 +0000 UTC m=+203.769270201" Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.346835 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerStarted","Data":"3facc8cd75cb42481cbeb449975400c1d7f9fa10b8fde33d3f19e60d331253a1"} Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.349027 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerStarted","Data":"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"} Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.350832 4740 generic.go:334] "Generic (PLEG): container finished" podID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerID="0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7" exitCode=0 Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.350913 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerDied","Data":"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7"} Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.365501 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wwk79" podStartSLOduration=3.260086465 podStartE2EDuration="50.365482415s" podCreationTimestamp="2026-02-16 12:55:27 +0000 UTC" firstStartedPulling="2026-02-16 12:55:29.654861211 +0000 UTC m=+157.031209932" lastFinishedPulling="2026-02-16 12:56:16.760257131 +0000 UTC m=+204.136605882" observedRunningTime="2026-02-16 12:56:17.364398619 +0000 UTC m=+204.740747340" watchObservedRunningTime="2026-02-16 12:56:17.365482415 +0000 UTC m=+204.741831136" Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.402680 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z48zk" podStartSLOduration=3.156457899 podStartE2EDuration="53.402660272s" podCreationTimestamp="2026-02-16 12:55:24 +0000 UTC" firstStartedPulling="2026-02-16 12:55:26.555749641 +0000 UTC m=+153.932098362" lastFinishedPulling="2026-02-16 12:56:16.801952024 +0000 UTC m=+204.178300735" observedRunningTime="2026-02-16 12:56:17.398052753 +0000 UTC m=+204.774401474" watchObservedRunningTime="2026-02-16 
12:56:17.402660272 +0000 UTC m=+204.779008993" Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.967398 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:56:18 crc kubenswrapper[4740]: I0216 12:56:18.009313 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:56:18 crc kubenswrapper[4740]: I0216 12:56:18.334907 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:56:18 crc kubenswrapper[4740]: I0216 12:56:18.334964 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:56:18 crc kubenswrapper[4740]: I0216 12:56:18.365443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerStarted","Data":"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9"} Feb 16 12:56:18 crc kubenswrapper[4740]: I0216 12:56:18.386410 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-22crz" podStartSLOduration=3.174872801 podStartE2EDuration="54.386394606s" podCreationTimestamp="2026-02-16 12:55:24 +0000 UTC" firstStartedPulling="2026-02-16 12:55:26.557167466 +0000 UTC m=+153.933516187" lastFinishedPulling="2026-02-16 12:56:17.768689271 +0000 UTC m=+205.145037992" observedRunningTime="2026-02-16 12:56:18.384314149 +0000 UTC m=+205.760662880" watchObservedRunningTime="2026-02-16 12:56:18.386394606 +0000 UTC m=+205.762743327" Feb 16 12:56:19 crc kubenswrapper[4740]: I0216 12:56:19.375372 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wwk79" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="registry-server" 
probeResult="failure" output=< Feb 16 12:56:19 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 12:56:19 crc kubenswrapper[4740]: > Feb 16 12:56:20 crc kubenswrapper[4740]: I0216 12:56:20.378415 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerStarted","Data":"9eb6019c2628026c4d4234fab687e7e554de2d5f7e7f3b6193043a4204e435e3"} Feb 16 12:56:21 crc kubenswrapper[4740]: I0216 12:56:21.387608 4740 generic.go:334] "Generic (PLEG): container finished" podID="44198116-006f-4be3-ad53-3d32576dd681" containerID="9eb6019c2628026c4d4234fab687e7e554de2d5f7e7f3b6193043a4204e435e3" exitCode=0 Feb 16 12:56:21 crc kubenswrapper[4740]: I0216 12:56:21.387750 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerDied","Data":"9eb6019c2628026c4d4234fab687e7e554de2d5f7e7f3b6193043a4204e435e3"} Feb 16 12:56:23 crc kubenswrapper[4740]: I0216 12:56:23.405100 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerStarted","Data":"b56fdd76a68f0a0d399f103627cbf5f18b347ef09e6032c9af5b14a6c5355e37"} Feb 16 12:56:23 crc kubenswrapper[4740]: I0216 12:56:23.428666 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hzpc4" podStartSLOduration=2.77918883 podStartE2EDuration="59.428650825s" podCreationTimestamp="2026-02-16 12:55:24 +0000 UTC" firstStartedPulling="2026-02-16 12:55:26.565153757 +0000 UTC m=+153.941502478" lastFinishedPulling="2026-02-16 12:56:23.214615752 +0000 UTC m=+210.590964473" observedRunningTime="2026-02-16 12:56:23.424955147 +0000 UTC m=+210.801303868" watchObservedRunningTime="2026-02-16 12:56:23.428650825 
+0000 UTC m=+210.804999546"
Feb 16 12:56:24 crc kubenswrapper[4740]: I0216 12:56:24.788492 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:56:24 crc kubenswrapper[4740]: I0216 12:56:24.791364 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:56:24 crc kubenswrapper[4740]: I0216 12:56:24.832664 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.166480 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.166684 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.209244 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.401235 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.401329 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.445667 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.471377 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.494155 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.395904 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z48zk"]
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.443367 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z48zk" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="registry-server" containerID="cri-o://c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2" gracePeriod=2
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.795599 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.965764 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content\") pod \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") "
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.965873 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities\") pod \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") "
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.965912 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frh8h\" (UniqueName: \"kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h\") pod \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") "
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.968062 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities" (OuterVolumeSpecName: "utilities") pod "e9545e2f-e72f-4944-bc7a-ed9b052a34b0" (UID: "e9545e2f-e72f-4944-bc7a-ed9b052a34b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.978261 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h" (OuterVolumeSpecName: "kube-api-access-frh8h") pod "e9545e2f-e72f-4944-bc7a-ed9b052a34b0" (UID: "e9545e2f-e72f-4944-bc7a-ed9b052a34b0"). InnerVolumeSpecName "kube-api-access-frh8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.025365 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9545e2f-e72f-4944-bc7a-ed9b052a34b0" (UID: "e9545e2f-e72f-4944-bc7a-ed9b052a34b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.067329 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.067360 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.067372 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frh8h\" (UniqueName: \"kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.380141 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.418917 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.449626 4740 generic.go:334] "Generic (PLEG): container finished" podID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerID="c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2" exitCode=0
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.449719 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.449724 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerDied","Data":"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"}
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.449775 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerDied","Data":"a9c9a2711d899c1e5a260796e95114ebf5c80382d12c8808c0846487a96c8aa1"}
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.449797 4740 scope.go:117] "RemoveContainer" containerID="c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.463313 4740 scope.go:117] "RemoveContainer" containerID="671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.483036 4740 scope.go:117] "RemoveContainer" containerID="1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.497692 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z48zk"]
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.501878 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z48zk"]
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.501939 4740 scope.go:117] "RemoveContainer" containerID="c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"
Feb 16 12:56:28 crc kubenswrapper[4740]: E0216 12:56:28.502435 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2\": container with ID starting with c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2 not found: ID does not exist" containerID="c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.502471 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"} err="failed to get container status \"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2\": rpc error: code = NotFound desc = could not find container \"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2\": container with ID starting with c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2 not found: ID does not exist"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.502511 4740 scope.go:117] "RemoveContainer" containerID="671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc"
Feb 16 12:56:28 crc kubenswrapper[4740]: E0216 12:56:28.502945 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc\": container with ID starting with 671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc not found: ID does not exist" containerID="671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.502981 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc"} err="failed to get container status \"671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc\": rpc error: code = NotFound desc = could not find container \"671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc\": container with ID starting with 671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc not found: ID does not exist"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.502996 4740 scope.go:117] "RemoveContainer" containerID="1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88"
Feb 16 12:56:28 crc kubenswrapper[4740]: E0216 12:56:28.503259 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88\": container with ID starting with 1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88 not found: ID does not exist" containerID="1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.503281 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88"} err="failed to get container status \"1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88\": rpc error: code = NotFound desc = could not find container \"1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88\": container with ID starting with 1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88 not found: ID does not exist"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.717649 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" containerID="cri-o://fbb95d26f626afc3bfc63396feee48e9a3723a0831baf7b1a599353c30ec5440" gracePeriod=15
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.289569 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" path="/var/lib/kubelet/pods/e9545e2f-e72f-4944-bc7a-ed9b052a34b0/volumes"
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.457504 4740 generic.go:334] "Generic (PLEG): container finished" podID="a9a22462-173f-4075-927a-30493a5745d7" containerID="fbb95d26f626afc3bfc63396feee48e9a3723a0831baf7b1a599353c30ec5440" exitCode=0
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.457590 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" event={"ID":"a9a22462-173f-4075-927a-30493a5745d7","Type":"ContainerDied","Data":"fbb95d26f626afc3bfc63396feee48e9a3723a0831baf7b1a599353c30ec5440"}
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.768705 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.899907 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900662 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900731 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900760 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900875 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900863 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900899 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900935 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5w72\" (UniqueName: \"kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900982 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901009 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901032 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901056 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901077 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901095 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901113 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.902031 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.902687 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.902713 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.902851 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.903105 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.903324 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.906151 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.906374 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.906549 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.907148 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72" (OuterVolumeSpecName: "kube-api-access-n5w72") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "kube-api-access-n5w72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.907322 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.907433 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.907579 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.908830 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.909158 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.004487 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.005132 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.005325 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.005490 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.005659 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.005891 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006051 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006181 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006356 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006480 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5w72\" (UniqueName: \"kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006631 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006776 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.467773 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" event={"ID":"a9a22462-173f-4075-927a-30493a5745d7","Type":"ContainerDied","Data":"b1fdab80b8055470789558626b94e6fd689f065930bcfe2c60fd34eb94175732"}
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.467847 4740 scope.go:117] "RemoveContainer" containerID="fbb95d26f626afc3bfc63396feee48e9a3723a0831baf7b1a599353c30ec5440"
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.467907 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.508018 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"]
Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.521332 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"]
Feb 16 12:56:31 crc kubenswrapper[4740]: I0216 12:56:31.288381 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a22462-173f-4075-927a-30493a5745d7" path="/var/lib/kubelet/pods/a9a22462-173f-4075-927a-30493a5745d7/volumes"
Feb 16 12:56:31 crc kubenswrapper[4740]: I0216 12:56:31.798613 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"]
Feb 16 12:56:31 crc kubenswrapper[4740]: I0216 12:56:31.799018 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wwk79" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="registry-server" containerID="cri-o://3facc8cd75cb42481cbeb449975400c1d7f9fa10b8fde33d3f19e60d331253a1" gracePeriod=2
Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.482457 4740 generic.go:334] "Generic (PLEG): container finished" podID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerID="3facc8cd75cb42481cbeb449975400c1d7f9fa10b8fde33d3f19e60d331253a1" exitCode=0
Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.482680 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerDied","Data":"3facc8cd75cb42481cbeb449975400c1d7f9fa10b8fde33d3f19e60d331253a1"}
Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.703622 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.845665 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m9jh\" (UniqueName: \"kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh\") pod \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") "
Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.845751 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities\") pod \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") "
Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.845797 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content\") pod \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") "
Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.847697 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities" (OuterVolumeSpecName: "utilities") pod "fa69bf39-1ed0-42ba-91f9-c401e7fb9337" (UID: "fa69bf39-1ed0-42ba-91f9-c401e7fb9337"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.861126 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh" (OuterVolumeSpecName: "kube-api-access-7m9jh") pod "fa69bf39-1ed0-42ba-91f9-c401e7fb9337" (UID: "fa69bf39-1ed0-42ba-91f9-c401e7fb9337"). InnerVolumeSpecName "kube-api-access-7m9jh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.947578 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m9jh\" (UniqueName: \"kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.947622 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.974837 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa69bf39-1ed0-42ba-91f9-c401e7fb9337" (UID: "fa69bf39-1ed0-42ba-91f9-c401e7fb9337"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.048235 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.488280 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerDied","Data":"ce33be103aa47d28e69db79295eb5459d1dc46ee55c5e4d98d8d9854797067ed"}
Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.488334 4740 scope.go:117] "RemoveContainer" containerID="3facc8cd75cb42481cbeb449975400c1d7f9fa10b8fde33d3f19e60d331253a1"
Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.488368 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.505214 4740 scope.go:117] "RemoveContainer" containerID="512a8521c4f3d6ef45799cb122a17152d8669df458582ef9c533c39abd849ac8"
Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.506952 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"]
Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.510080 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"]
Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.518888 4740 scope.go:117] "RemoveContainer" containerID="2a749969623029aaccbdf27a9810d459d9c5039d65880ed9d91f3a4574878a8a"
Feb 16 12:56:35 crc kubenswrapper[4740]: I0216 12:56:35.206214 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:56:35 crc kubenswrapper[4740]: I0216 12:56:35.290480 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" path="/var/lib/kubelet/pods/fa69bf39-1ed0-42ba-91f9-c401e7fb9337/volumes"
Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.197227 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"]
Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.197731 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hzpc4" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="registry-server" containerID="cri-o://b56fdd76a68f0a0d399f103627cbf5f18b347ef09e6032c9af5b14a6c5355e37" gracePeriod=2
Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471506 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d"]
Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471787 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="extract-content"
Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471829 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="extract-content"
Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471844 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="extract-utilities"
Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471853 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="extract-utilities"
Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471865 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="extract-utilities"
Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471874 4740 state_mem.go:107] "Deleted CPUSet assignment"
podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="extract-utilities" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471885 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471892 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471904 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471911 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471925 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71598100-ab8f-489f-9a0a-d5396867ddc2" containerName="pruner" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471934 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="71598100-ab8f-489f-9a0a-d5396867ddc2" containerName="pruner" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471948 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471955 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471969 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="extract-content" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471978 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="extract-content" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.472105 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.472118 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.472128 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="71598100-ab8f-489f-9a0a-d5396867ddc2" containerName="pruner" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.472141 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.472542 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.474862 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.476618 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.476792 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.479241 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.479527 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.480952 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.480987 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.481054 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.482354 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.483049 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.484310 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.484335 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.490715 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-dir\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.490796 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.490874 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.490914 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.490993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491032 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491067 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491099 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-session\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491122 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-error\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491162 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" 
Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491431 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491469 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-policies\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491542 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-login\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491588 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bndkk\" (UniqueName: \"kubernetes.io/projected/0e156096-dd90-4dd0-80ba-42d0642822ee-kube-api-access-bndkk\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.495095 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 
16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.501247 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.523923 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.525214 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d"] Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.528578 4740 generic.go:334] "Generic (PLEG): container finished" podID="44198116-006f-4be3-ad53-3d32576dd681" containerID="b56fdd76a68f0a0d399f103627cbf5f18b347ef09e6032c9af5b14a6c5355e37" exitCode=0 Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.528622 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerDied","Data":"b56fdd76a68f0a0d399f103627cbf5f18b347ef09e6032c9af5b14a6c5355e37"} Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593222 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593294 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-session\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " 
pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593317 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-error\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593346 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593374 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593398 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-policies\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593419 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-login\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593442 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bndkk\" (UniqueName: \"kubernetes.io/projected/0e156096-dd90-4dd0-80ba-42d0642822ee-kube-api-access-bndkk\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593467 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-dir\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593491 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593511 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " 
pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593532 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593561 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.595187 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-dir\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.595708 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.595714 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-policies\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.596237 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.597669 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.599870 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc 
kubenswrapper[4740]: I0216 12:56:36.600256 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.600546 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.601548 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.602347 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-login\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.602938 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.605369 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-session\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.611161 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-error\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.612025 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bndkk\" (UniqueName: \"kubernetes.io/projected/0e156096-dd90-4dd0-80ba-42d0642822ee-kube-api-access-bndkk\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.799287 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.168130 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.200857 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities\") pod \"44198116-006f-4be3-ad53-3d32576dd681\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.200934 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content\") pod \"44198116-006f-4be3-ad53-3d32576dd681\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.200985 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb4r9\" (UniqueName: \"kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9\") pod \"44198116-006f-4be3-ad53-3d32576dd681\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.201824 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities" (OuterVolumeSpecName: "utilities") pod "44198116-006f-4be3-ad53-3d32576dd681" (UID: "44198116-006f-4be3-ad53-3d32576dd681"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.205070 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9" (OuterVolumeSpecName: "kube-api-access-xb4r9") pod "44198116-006f-4be3-ad53-3d32576dd681" (UID: "44198116-006f-4be3-ad53-3d32576dd681"). InnerVolumeSpecName "kube-api-access-xb4r9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.230394 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d"] Feb 16 12:56:37 crc kubenswrapper[4740]: W0216 12:56:37.242030 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e156096_dd90_4dd0_80ba_42d0642822ee.slice/crio-c152d6ea1dfce7e2360da18d09c714bf8de7c16a910a5c1c3f6addcb645753bf WatchSource:0}: Error finding container c152d6ea1dfce7e2360da18d09c714bf8de7c16a910a5c1c3f6addcb645753bf: Status 404 returned error can't find the container with id c152d6ea1dfce7e2360da18d09c714bf8de7c16a910a5c1c3f6addcb645753bf Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.258219 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44198116-006f-4be3-ad53-3d32576dd681" (UID: "44198116-006f-4be3-ad53-3d32576dd681"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.302707 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.302747 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.302759 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb4r9\" (UniqueName: \"kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.537228 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerDied","Data":"e130a9aace627f73e9efde47dbcd50406ac735047566ac4275095c2434589e89"} Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.537522 4740 scope.go:117] "RemoveContainer" containerID="b56fdd76a68f0a0d399f103627cbf5f18b347ef09e6032c9af5b14a6c5355e37" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.537258 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.538226 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" event={"ID":"0e156096-dd90-4dd0-80ba-42d0642822ee","Type":"ContainerStarted","Data":"c152d6ea1dfce7e2360da18d09c714bf8de7c16a910a5c1c3f6addcb645753bf"} Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.555305 4740 scope.go:117] "RemoveContainer" containerID="9eb6019c2628026c4d4234fab687e7e554de2d5f7e7f3b6193043a4204e435e3" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.558098 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"] Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.562586 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"] Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.567766 4740 scope.go:117] "RemoveContainer" containerID="2d50e15e7dfab2ba0d8e36c47eedb3a59a16e3076615834b16679a8be2cde520" Feb 16 12:56:38 crc kubenswrapper[4740]: I0216 12:56:38.544926 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" event={"ID":"0e156096-dd90-4dd0-80ba-42d0642822ee","Type":"ContainerStarted","Data":"f90e71c3ade1b222a48b125680139be9e90427c868532cbcc36fa33a78369fef"} Feb 16 12:56:38 crc kubenswrapper[4740]: I0216 12:56:38.546307 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:38 crc kubenswrapper[4740]: I0216 12:56:38.551353 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:38 crc kubenswrapper[4740]: I0216 12:56:38.564105 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" podStartSLOduration=35.564093185 podStartE2EDuration="35.564093185s" podCreationTimestamp="2026-02-16 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:38.563422943 +0000 UTC m=+225.939771654" watchObservedRunningTime="2026-02-16 12:56:38.564093185 +0000 UTC m=+225.940441906" Feb 16 12:56:39 crc kubenswrapper[4740]: I0216 12:56:39.288866 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44198116-006f-4be3-ad53-3d32576dd681" path="/var/lib/kubelet/pods/44198116-006f-4be3-ad53-3d32576dd681/volumes" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.768113 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.768883 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="extract-utilities" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.768898 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="extract-utilities" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.768911 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="registry-server" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.768937 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="registry-server" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.768960 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="extract-content" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.768967 4740 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="extract-content" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.769125 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="registry-server" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.769619 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.769668 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.769793 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770313 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32" gracePeriod=15 Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770401 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6" gracePeriod=15 Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770382 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72" gracePeriod=15 Feb 16 12:56:53 crc 
kubenswrapper[4740]: I0216 12:56:53.770356 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb" gracePeriod=15 Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770538 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770559 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770574 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770584 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770598 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770607 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770617 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770625 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770636 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770644 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770655 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770663 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770673 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770680 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770395 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4" gracePeriod=15 Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770831 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770847 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770856 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770865 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770874 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770883 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770971 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770978 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.771064 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.774368 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.812867 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.924928 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.924985 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925019 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925049 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925117 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925482 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925584 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925692 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026618 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026679 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026702 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026727 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026775 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026800 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026852 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026914 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026874 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026950 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026991 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026997 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.027036 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.027073 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.027080 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.027112 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.101682 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: W0216 12:56:54.132715 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-fdafda45f2a3726c1a5b1d1b6f87424d6dcecfc66009ab0e01dfa4a5f5ff64fd WatchSource:0}: Error finding container fdafda45f2a3726c1a5b1d1b6f87424d6dcecfc66009ab0e01dfa4a5f5ff64fd: Status 404 returned error can't find the container with id fdafda45f2a3726c1a5b1d1b6f87424d6dcecfc66009ab0e01dfa4a5f5ff64fd Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.138668 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894bb693d3a7df5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,LastTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.377147 4740 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.377884 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.378385 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.378628 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.379101 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.379175 4740 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.379878 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.422438 4740 event.go:368] "Unable to write event (may retry 
after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894bb693d3a7df5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,LastTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.580547 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.631429 4740 generic.go:334] "Generic (PLEG): container finished" podID="64be474a-1d70-42d2-aa8b-977624363891" containerID="1ab679fc04f940cb37eaeb115a58b3e3c81632adb21deae450b25d776eebad8d" exitCode=0 Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.632158 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"64be474a-1d70-42d2-aa8b-977624363891","Type":"ContainerDied","Data":"1ab679fc04f940cb37eaeb115a58b3e3c81632adb21deae450b25d776eebad8d"} Feb 
16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.633441 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.634119 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.634831 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7"} Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.634861 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fdafda45f2a3726c1a5b1d1b6f87424d6dcecfc66009ab0e01dfa4a5f5ff64fd"} Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.635620 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.636202 4740 status_manager.go:851] "Failed to get status for 
pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.639497 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.640860 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.641490 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72" exitCode=0 Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.641511 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb" exitCode=0 Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.641521 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6" exitCode=0 Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.641529 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4" exitCode=2 Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.641561 4740 scope.go:117] "RemoveContainer" containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.981565 
4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Feb 16 12:56:55 crc kubenswrapper[4740]: I0216 12:56:55.654185 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:56:55 crc kubenswrapper[4740]: E0216 12:56:55.782498 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Feb 16 12:56:55 crc kubenswrapper[4740]: I0216 12:56:55.984027 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:55 crc kubenswrapper[4740]: I0216 12:56:55.984872 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:55 crc kubenswrapper[4740]: I0216 12:56:55.985056 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.071650 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.072503 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.073179 4740 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.073724 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.073983 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177143 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock\") pod \"64be474a-1d70-42d2-aa8b-977624363891\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177216 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177252 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177302 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock" (OuterVolumeSpecName: "var-lock") pod "64be474a-1d70-42d2-aa8b-977624363891" (UID: "64be474a-1d70-42d2-aa8b-977624363891"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177311 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access\") pod \"64be474a-1d70-42d2-aa8b-977624363891\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177348 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177386 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir\") pod \"64be474a-1d70-42d2-aa8b-977624363891\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177409 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177450 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177496 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "64be474a-1d70-42d2-aa8b-977624363891" (UID: "64be474a-1d70-42d2-aa8b-977624363891"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177615 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177770 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177794 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177862 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177889 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177901 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.182194 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "64be474a-1d70-42d2-aa8b-977624363891" (UID: "64be474a-1d70-42d2-aa8b-977624363891"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.279359 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.666504 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"64be474a-1d70-42d2-aa8b-977624363891","Type":"ContainerDied","Data":"67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6"} Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.666546 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.666578 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.669695 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.670655 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32" exitCode=0 Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.670717 4740 scope.go:117] "RemoveContainer" containerID="f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.670836 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.682045 4740 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.682944 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.683412 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.686973 4740 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.687478 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.688046 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.692983 4740 scope.go:117] "RemoveContainer" containerID="513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.717090 4740 scope.go:117] "RemoveContainer" containerID="3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.730872 4740 scope.go:117] "RemoveContainer" containerID="f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.742098 4740 scope.go:117] "RemoveContainer" containerID="0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.755351 4740 scope.go:117] "RemoveContainer" containerID="92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.776617 4740 scope.go:117] "RemoveContainer" containerID="f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72" Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.777294 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\": container with ID starting with f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72 not 
found: ID does not exist" containerID="f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.777338 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72"} err="failed to get container status \"f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\": rpc error: code = NotFound desc = could not find container \"f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\": container with ID starting with f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72 not found: ID does not exist" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.777375 4740 scope.go:117] "RemoveContainer" containerID="513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb" Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.778374 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\": container with ID starting with 513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb not found: ID does not exist" containerID="513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.778401 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb"} err="failed to get container status \"513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\": rpc error: code = NotFound desc = could not find container \"513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\": container with ID starting with 513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb not found: ID does not exist" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.778417 
4740 scope.go:117] "RemoveContainer" containerID="3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6" Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.778769 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\": container with ID starting with 3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6 not found: ID does not exist" containerID="3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.778834 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6"} err="failed to get container status \"3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\": rpc error: code = NotFound desc = could not find container \"3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\": container with ID starting with 3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6 not found: ID does not exist" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.778871 4740 scope.go:117] "RemoveContainer" containerID="f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4" Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.779274 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\": container with ID starting with f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4 not found: ID does not exist" containerID="f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.779323 4740 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4"} err="failed to get container status \"f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\": rpc error: code = NotFound desc = could not find container \"f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\": container with ID starting with f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4 not found: ID does not exist" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.779350 4740 scope.go:117] "RemoveContainer" containerID="0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32" Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.779646 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\": container with ID starting with 0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32 not found: ID does not exist" containerID="0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.779670 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32"} err="failed to get container status \"0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\": rpc error: code = NotFound desc = could not find container \"0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\": container with ID starting with 0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32 not found: ID does not exist" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.779685 4740 scope.go:117] "RemoveContainer" containerID="92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5" Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.779999 4740 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\": container with ID starting with 92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5 not found: ID does not exist" containerID="92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.780084 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5"} err="failed to get container status \"92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\": rpc error: code = NotFound desc = could not find container \"92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\": container with ID starting with 92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5 not found: ID does not exist" Feb 16 12:56:57 crc kubenswrapper[4740]: I0216 12:56:57.290157 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 12:56:57 crc kubenswrapper[4740]: E0216 12:56:57.383700 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Feb 16 12:57:00 crc kubenswrapper[4740]: E0216 12:57:00.584430 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="6.4s" Feb 16 12:57:03 crc kubenswrapper[4740]: I0216 12:57:03.283295 4740 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:57:03 crc kubenswrapper[4740]: I0216 12:57:03.284105 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:57:04 crc kubenswrapper[4740]: E0216 12:57:04.423361 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894bb693d3a7df5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,LastTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.280167 4740 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.281705 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.282280 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.301027 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.301087 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:06 crc kubenswrapper[4740]: E0216 12:57:06.301730 4740 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.302190 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:06 crc kubenswrapper[4740]: W0216 12:57:06.324992 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-70bfd7a79555bd42c122c87030ea7341b4c487e250a3bfff3c5746280c07e675 WatchSource:0}: Error finding container 70bfd7a79555bd42c122c87030ea7341b4c487e250a3bfff3c5746280c07e675: Status 404 returned error can't find the container with id 70bfd7a79555bd42c122c87030ea7341b4c487e250a3bfff3c5746280c07e675 Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.736775 4740 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="cde0837214fafd0fe31cb3cc2d39a4f5a6bdafa66ea80202d737e133015cd944" exitCode=0 Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.736859 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"cde0837214fafd0fe31cb3cc2d39a4f5a6bdafa66ea80202d737e133015cd944"} Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.736939 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"70bfd7a79555bd42c122c87030ea7341b4c487e250a3bfff3c5746280c07e675"} Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.737491 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.737539 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.737836 4740 status_manager.go:851] 
"Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.738133 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:57:06 crc kubenswrapper[4740]: E0216 12:57:06.738242 4740 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:06 crc kubenswrapper[4740]: E0216 12:57:06.985708 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="7s" Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.746743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8ae77b89f3842905ed9c4faa69998a8e3b19fe2701429b64ce778e7b8fa5ac39"} Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.746799 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"be9a7b7b7e1acdd23322a4b3439e8199407e4d11022101adfeacad6f6234070c"} Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.746831 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"59cc29256725447f10463fb188d447715c65be3de0e68d019c75af8c7bdc4329"} Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.746844 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6c94ac9c7dd9b8c886e2dc911d18c5cb054c6ec5303cdbcdfdf34cbd1700e34a"} Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.749568 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.749625 4740 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6" exitCode=1 Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.749651 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6"} Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.751006 4740 scope.go:117] "RemoveContainer" containerID="de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6" Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.757759 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.758205 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec689bc0b6eac6457c59bbbccd852365ae956d00ef4ab3b43e54faa45aed03ca"} Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.761868 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fe92759d401638af0f67ab6bd256d3d9395d0d79ee4cc637b3e4f7878377a8ec"} Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.762078 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.762154 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.762170 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:10 crc kubenswrapper[4740]: I0216 12:57:10.305477 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:57:11 crc kubenswrapper[4740]: I0216 12:57:11.303550 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:11 crc kubenswrapper[4740]: I0216 12:57:11.303637 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:11 crc 
kubenswrapper[4740]: I0216 12:57:11.310879 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.773235 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.804883 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.806692 4740 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fe92759d401638af0f67ab6bd256d3d9395d0d79ee4cc637b3e4f7878377a8ec" exitCode=255 Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.806771 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fe92759d401638af0f67ab6bd256d3d9395d0d79ee4cc637b3e4f7878377a8ec"} Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.807316 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.807356 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.811037 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4bce1393-19e6-43be-bc32-b015f9dd4593" Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.811283 4740 scope.go:117] "RemoveContainer" 
containerID="fe92759d401638af0f67ab6bd256d3d9395d0d79ee4cc637b3e4f7878377a8ec" Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.814584 4740 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://6c94ac9c7dd9b8c886e2dc911d18c5cb054c6ec5303cdbcdfdf34cbd1700e34a" Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.814614 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.815170 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.824147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f3a165460d95a26aacb38df0d4126c1baa3f95a50b17cf37d4c030a2208dab07"} Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.824369 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.824461 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.824492 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.931504 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.938108 4740 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:57:15 crc kubenswrapper[4740]: I0216 12:57:15.830588 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:15 crc kubenswrapper[4740]: I0216 12:57:15.830876 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e" Feb 16 12:57:20 crc kubenswrapper[4740]: I0216 12:57:20.309967 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:57:22 crc kubenswrapper[4740]: I0216 12:57:22.928956 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 12:57:23 crc kubenswrapper[4740]: I0216 12:57:23.265854 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 12:57:23 crc kubenswrapper[4740]: I0216 12:57:23.299620 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4bce1393-19e6-43be-bc32-b015f9dd4593" Feb 16 12:57:24 crc kubenswrapper[4740]: I0216 12:57:24.287793 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 12:57:24 crc kubenswrapper[4740]: I0216 12:57:24.741558 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 12:57:24 crc kubenswrapper[4740]: I0216 12:57:24.911562 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 12:57:24 crc kubenswrapper[4740]: I0216 12:57:24.911600 4740 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 12:57:24 crc kubenswrapper[4740]: I0216 12:57:24.997178 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.052968 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.315632 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.489700 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.614626 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.778560 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.977130 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.093645 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.202249 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.266132 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.424055 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.429973 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.651581 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.662543 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.710052 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.946113 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.979292 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.987100 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.056791 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.169150 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.280056 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.303875 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.386316 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.421387 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.422021 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.454100 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.587045 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.623307 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.717622 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.748612 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 
12:57:27.749427 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.035709 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.105299 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.147215 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.250686 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.318035 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.351105 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.467016 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.478991 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.761743 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.767960 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.011757 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.019979 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.083540 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.083849 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.100966 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.230306 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.293170 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.417144 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.546859 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.562142 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 
12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.625754 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.834601 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.962503 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.984505 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.994059 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.999508 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.005679 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.127896 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.155319 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.193223 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.237920 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.241928 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.340450 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.378754 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.401709 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.418277 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.429044 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.553322 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.571667 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.635667 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.637053 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" 
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.715413 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.740999 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.752869 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.824233 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.844256 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.918221 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.995110 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.005122 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.057951 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.079384 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.094713 4740 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.098192 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.126606 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.149880 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.214223 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.387003 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.406243 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.413906 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.437645 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.442898 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.448224 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.453687 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.485682 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.582765 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.597911 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.653704 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.784914 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.828345 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.898486 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.904418 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.933655 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.036712 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 12:57:32 crc 
kubenswrapper[4740]: I0216 12:57:32.083930 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.175421 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.185957 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.230335 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.250183 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.292348 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.332094 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.451335 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.480473 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.510206 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.554141 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.558396 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.617970 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.655403 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.709956 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.715458 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.755679 4740 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.772997 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.795404 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.799864 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.837310 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.930787 4740 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.948848 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.025402 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.103861 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.208163 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.221737 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.249157 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.277340 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.302843 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.348374 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.401752 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 12:57:33 crc 
kubenswrapper[4740]: I0216 12:57:33.414843 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.542366 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.568195 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.601116 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.659662 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.660888 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.707339 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.795977 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.814721 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.863872 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 12:57:33 crc 
kubenswrapper[4740]: I0216 12:57:33.888835 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.960860 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.984664 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.010520 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.062918 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.130472 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.168436 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.227083 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.234511 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.314734 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.314777 4740 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.358656 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.374332 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.437612 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.608873 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.704172 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.728267 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.735093 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.774520 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.788625 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.809835 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.902395 4740 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.924882 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.988703 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.008644 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.095019 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.196877 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.403767 4740 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.426984 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.452711 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.495157 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.504938 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 
12:57:35.527312 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.589592 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.604286 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.688072 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.709262 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.722175 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.783393 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.785447 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.887783 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.925077 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.927196 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.092150 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.157551 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.159727 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.384003 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.393364 4740 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.411046 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.436324 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.441604 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.453710 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.567767 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.587166 4740 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.676948 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.697849 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.772694 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.835688 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.840941 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.967707 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.972588 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.982224 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.984979 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.252661 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 12:57:37 crc 
kubenswrapper[4740]: I0216 12:57:37.271364 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.552267 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.562627 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.617432 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.642499 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.643746 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.810006 4740 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.837943 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.074995 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.317687 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.348678 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 12:57:38 crc 
kubenswrapper[4740]: I0216 12:57:38.355051 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.416693 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.498302 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.545129 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.682254 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.798929 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.841410 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.931291 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.062349 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.276825 4740 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.279366 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=46.27934079 podStartE2EDuration="46.27934079s" podCreationTimestamp="2026-02-16 12:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:57:13.338619595 +0000 UTC m=+260.714968386" watchObservedRunningTime="2026-02-16 12:57:39.27934079 +0000 UTC m=+286.655689521" Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.288187 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.288236 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.294286 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.316583 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.316563218 podStartE2EDuration="26.316563218s" podCreationTimestamp="2026-02-16 12:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:57:39.308188239 +0000 UTC m=+286.684536980" watchObservedRunningTime="2026-02-16 12:57:39.316563218 +0000 UTC m=+286.692911949" Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.953334 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.024285 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.276626 4740 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.408228 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.476711 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.486586 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.751263 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.796494 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 12:57:41 crc kubenswrapper[4740]: I0216 12:57:41.334853 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 12:57:41 crc kubenswrapper[4740]: I0216 12:57:41.377499 4740 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 12:57:41 crc kubenswrapper[4740]: I0216 12:57:41.660288 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 12:57:41 crc kubenswrapper[4740]: I0216 12:57:41.782080 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 12:57:42 crc kubenswrapper[4740]: I0216 12:57:42.402878 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 12:57:42 crc kubenswrapper[4740]: I0216 12:57:42.676404 4740 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 12:57:43 crc kubenswrapper[4740]: I0216 12:57:43.224121 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 12:57:47 crc kubenswrapper[4740]: I0216 12:57:47.326618 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 12:57:47 crc kubenswrapper[4740]: I0216 12:57:47.327526 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7" gracePeriod=5 Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.052945 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-22crz"] Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.053478 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-22crz" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="registry-server" containerID="cri-o://3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9" gracePeriod=30 Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.071930 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smtc5"] Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.072239 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-smtc5" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="registry-server" containerID="cri-o://a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483" gracePeriod=30 Feb 16 12:57:48 crc 
kubenswrapper[4740]: I0216 12:57:48.081649 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"] Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.081913 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" podUID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" containerName="marketplace-operator" containerID="cri-o://b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf" gracePeriod=30 Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.088931 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"] Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.089286 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tbqv5" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="registry-server" containerID="cri-o://968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b" gracePeriod=30 Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.107529 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"] Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.107805 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lrlzg" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="registry-server" containerID="cri-o://0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2" gracePeriod=30 Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127102 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xsssg"] Feb 16 12:57:48 crc kubenswrapper[4740]: E0216 12:57:48.127363 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64be474a-1d70-42d2-aa8b-977624363891" 
containerName="installer" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127382 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="64be474a-1d70-42d2-aa8b-977624363891" containerName="installer" Feb 16 12:57:48 crc kubenswrapper[4740]: E0216 12:57:48.127392 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127399 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127482 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="64be474a-1d70-42d2-aa8b-977624363891" containerName="installer" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127495 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127873 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.172299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.172379 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6b22\" (UniqueName: \"kubernetes.io/projected/db2dd193-ab4e-4011-988a-d516f2da367e-kube-api-access-g6b22\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.172406 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.180896 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xsssg"] Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.276443 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xsssg\" (UID: 
\"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.276864 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6b22\" (UniqueName: \"kubernetes.io/projected/db2dd193-ab4e-4011-988a-d516f2da367e-kube-api-access-g6b22\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.276905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.278027 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.287999 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.296969 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g6b22\" (UniqueName: \"kubernetes.io/projected/db2dd193-ab4e-4011-988a-d516f2da367e-kube-api-access-g6b22\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.489274 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.493628 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.524052 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.528916 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.531899 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.579997 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca\") pod \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580036 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content\") pod \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580091 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities\") pod \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580138 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities\") pod \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580174 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk44j\" (UniqueName: \"kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j\") pod \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580199 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content\") pod \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580218 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89vb4\" (UniqueName: \"kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4\") pod \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580235 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics\") pod \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580250 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfrxb\" (UniqueName: \"kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb\") pod \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580282 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities\") pod \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580311 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content\") pod \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\" (UID: 
\"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580335 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbrl8\" (UniqueName: \"kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8\") pod \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580963 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.581446 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities" (OuterVolumeSpecName: "utilities") pod "eb4cf07f-4486-4ff8-88d3-b04296a09ece" (UID: "eb4cf07f-4486-4ff8-88d3-b04296a09ece"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.581958 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities" (OuterVolumeSpecName: "utilities") pod "70e65531-7cfb-415d-a0a7-25288c2cd5c8" (UID: "70e65531-7cfb-415d-a0a7-25288c2cd5c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.582599 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities" (OuterVolumeSpecName: "utilities") pod "14e85e39-c3bc-4944-8b13-a4e405ccafdc" (UID: "14e85e39-c3bc-4944-8b13-a4e405ccafdc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.585715 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" (UID: "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.595594 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" (UID: "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.619093 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb" (OuterVolumeSpecName: "kube-api-access-lfrxb") pod "70e65531-7cfb-415d-a0a7-25288c2cd5c8" (UID: "70e65531-7cfb-415d-a0a7-25288c2cd5c8"). InnerVolumeSpecName "kube-api-access-lfrxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.619154 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4" (OuterVolumeSpecName: "kube-api-access-89vb4") pod "eb4cf07f-4486-4ff8-88d3-b04296a09ece" (UID: "eb4cf07f-4486-4ff8-88d3-b04296a09ece"). InnerVolumeSpecName "kube-api-access-89vb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.619198 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j" (OuterVolumeSpecName: "kube-api-access-jk44j") pod "14e85e39-c3bc-4944-8b13-a4e405ccafdc" (UID: "14e85e39-c3bc-4944-8b13-a4e405ccafdc"). InnerVolumeSpecName "kube-api-access-jk44j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.621237 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8" (OuterVolumeSpecName: "kube-api-access-wbrl8") pod "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" (UID: "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b"). InnerVolumeSpecName "kube-api-access-wbrl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.631342 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70e65531-7cfb-415d-a0a7-25288c2cd5c8" (UID: "70e65531-7cfb-415d-a0a7-25288c2cd5c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.667484 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14e85e39-c3bc-4944-8b13-a4e405ccafdc" (UID: "14e85e39-c3bc-4944-8b13-a4e405ccafdc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.681440 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities\") pod \"4fd80862-652c-4fa2-a591-44a3cc76379d\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.681493 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content\") pod \"4fd80862-652c-4fa2-a591-44a3cc76379d\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.682867 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities" (OuterVolumeSpecName: "utilities") pod "4fd80862-652c-4fa2-a591-44a3cc76379d" (UID: "4fd80862-652c-4fa2-a591-44a3cc76379d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.688983 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9s2l\" (UniqueName: \"kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l\") pod \"4fd80862-652c-4fa2-a591-44a3cc76379d\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689448 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689465 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689476 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689490 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk44j\" (UniqueName: \"kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689500 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89vb4\" (UniqueName: \"kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689509 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689518 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfrxb\" (UniqueName: \"kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689526 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689534 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689543 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbrl8\" (UniqueName: \"kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689551 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689559 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.692070 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l" 
(OuterVolumeSpecName: "kube-api-access-l9s2l") pod "4fd80862-652c-4fa2-a591-44a3cc76379d" (UID: "4fd80862-652c-4fa2-a591-44a3cc76379d"). InnerVolumeSpecName "kube-api-access-l9s2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.706668 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fd80862-652c-4fa2-a591-44a3cc76379d" (UID: "4fd80862-652c-4fa2-a591-44a3cc76379d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.726437 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xsssg"] Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.775387 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb4cf07f-4486-4ff8-88d3-b04296a09ece" (UID: "eb4cf07f-4486-4ff8-88d3-b04296a09ece"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.790339 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.790379 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9s2l\" (UniqueName: \"kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.790394 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.024685 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" event={"ID":"db2dd193-ab4e-4011-988a-d516f2da367e","Type":"ContainerStarted","Data":"48c754b80487afff4b365f84d5b4b7eb63216492fe5e150f580c94fdc92e82b1"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.024732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" event={"ID":"db2dd193-ab4e-4011-988a-d516f2da367e","Type":"ContainerStarted","Data":"3e9d0deb96d52dc1803a334ccb068715b43914ab065a3f0f1f47258379f9e2a9"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.024759 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.025863 4740 generic.go:334] "Generic (PLEG): container finished" podID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" containerID="b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf" 
exitCode=0 Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.025931 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" event={"ID":"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b","Type":"ContainerDied","Data":"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.025949 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.025974 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" event={"ID":"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b","Type":"ContainerDied","Data":"ecc39d12cb6ac857f193b234c0c65095915f019fb5a183124161212d668749a6"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.025995 4740 scope.go:117] "RemoveContainer" containerID="b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.027490 4740 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xsssg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.027601 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" podUID="db2dd193-ab4e-4011-988a-d516f2da367e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.028080 4740 generic.go:334] "Generic (PLEG): container finished" podID="4fd80862-652c-4fa2-a591-44a3cc76379d" 
containerID="968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b" exitCode=0 Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.028104 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerDied","Data":"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.028133 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.028136 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerDied","Data":"f4f0e4c138876a419d8305e7e6a9bc95934cbb191f330681343c30e1610937eb"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.032425 4740 generic.go:334] "Generic (PLEG): container finished" podID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerID="a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483" exitCode=0 Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.032521 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerDied","Data":"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.032560 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerDied","Data":"30342fb7e4ac42c29ddfbf6e245edd8370e6082c882f3ddc8fc68fa25e67ec8b"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.032524 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.034769 4740 generic.go:334] "Generic (PLEG): container finished" podID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerID="0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2" exitCode=0 Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.034857 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerDied","Data":"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.034885 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerDied","Data":"bd639f85fe34dad1670a6a91b1a5a271314aa8af3eb87c2254c3dba3da066707"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.034952 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.041170 4740 generic.go:334] "Generic (PLEG): container finished" podID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerID="3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9" exitCode=0 Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.041211 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerDied","Data":"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.041242 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerDied","Data":"e2704b65ce01fba3c60e03244a825b4b8122c50b215c9372a0b6818fde2a82aa"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.041321 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.050291 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" podStartSLOduration=1.05027396 podStartE2EDuration="1.05027396s" podCreationTimestamp="2026-02-16 12:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:57:49.046643723 +0000 UTC m=+296.422992454" watchObservedRunningTime="2026-02-16 12:57:49.05027396 +0000 UTC m=+296.426622681" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.055144 4740 scope.go:117] "RemoveContainer" containerID="b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.055689 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf\": container with ID starting with b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf not found: ID does not exist" containerID="b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.055738 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf"} err="failed to get container status \"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf\": rpc error: code = NotFound desc = could not find container \"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf\": container with ID starting with b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.055771 4740 scope.go:117] 
"RemoveContainer" containerID="968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.075209 4740 scope.go:117] "RemoveContainer" containerID="03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.086019 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.093204 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.096476 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.100891 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.107133 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smtc5"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.110965 4740 scope.go:117] "RemoveContainer" containerID="fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.114275 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-smtc5"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.120147 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-22crz"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.129986 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-22crz"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.133487 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.134706 4740 scope.go:117] "RemoveContainer" containerID="968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.135447 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b\": container with ID starting with 968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b not found: ID does not exist" containerID="968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.135477 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b"} err="failed to get container status \"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b\": rpc error: code = NotFound desc = could not find container \"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b\": container with ID starting with 968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.135497 4740 scope.go:117] "RemoveContainer" containerID="03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.135736 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc\": container with ID starting with 03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc not found: ID does not exist" containerID="03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 
12:57:49.135752 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc"} err="failed to get container status \"03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc\": rpc error: code = NotFound desc = could not find container \"03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc\": container with ID starting with 03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.135764 4740 scope.go:117] "RemoveContainer" containerID="fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.136023 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8\": container with ID starting with fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8 not found: ID does not exist" containerID="fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.136040 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8"} err="failed to get container status \"fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8\": rpc error: code = NotFound desc = could not find container \"fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8\": container with ID starting with fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.136052 4740 scope.go:117] "RemoveContainer" containerID="a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483" Feb 16 12:57:49 crc 
kubenswrapper[4740]: I0216 12:57:49.137957 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.147975 4740 scope.go:117] "RemoveContainer" containerID="acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.164156 4740 scope.go:117] "RemoveContainer" containerID="c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.175435 4740 scope.go:117] "RemoveContainer" containerID="a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.175915 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483\": container with ID starting with a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483 not found: ID does not exist" containerID="a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.176033 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483"} err="failed to get container status \"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483\": rpc error: code = NotFound desc = could not find container \"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483\": container with ID starting with a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.176127 4740 scope.go:117] "RemoveContainer" containerID="acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 
12:57:49.176439 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976\": container with ID starting with acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976 not found: ID does not exist" containerID="acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.176460 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976"} err="failed to get container status \"acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976\": rpc error: code = NotFound desc = could not find container \"acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976\": container with ID starting with acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.176474 4740 scope.go:117] "RemoveContainer" containerID="c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.176819 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106\": container with ID starting with c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106 not found: ID does not exist" containerID="c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.176909 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106"} err="failed to get container status \"c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106\": rpc 
error: code = NotFound desc = could not find container \"c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106\": container with ID starting with c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.177063 4740 scope.go:117] "RemoveContainer" containerID="0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.193760 4740 scope.go:117] "RemoveContainer" containerID="6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.210384 4740 scope.go:117] "RemoveContainer" containerID="9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.285166 4740 scope.go:117] "RemoveContainer" containerID="0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.286583 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2\": container with ID starting with 0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2 not found: ID does not exist" containerID="0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.286624 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2"} err="failed to get container status \"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2\": rpc error: code = NotFound desc = could not find container \"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2\": container with ID starting with 0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2 not 
found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.286652 4740 scope.go:117] "RemoveContainer" containerID="6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.286892 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae\": container with ID starting with 6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae not found: ID does not exist" containerID="6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.286916 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae"} err="failed to get container status \"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae\": rpc error: code = NotFound desc = could not find container \"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae\": container with ID starting with 6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.286930 4740 scope.go:117] "RemoveContainer" containerID="9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.287160 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d\": container with ID starting with 9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d not found: ID does not exist" containerID="9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.287180 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d"} err="failed to get container status \"9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d\": rpc error: code = NotFound desc = could not find container \"9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d\": container with ID starting with 9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.287211 4740 scope.go:117] "RemoveContainer" containerID="3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.288412 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" path="/var/lib/kubelet/pods/14e85e39-c3bc-4944-8b13-a4e405ccafdc/volumes" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.289330 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" path="/var/lib/kubelet/pods/4fd80862-652c-4fa2-a591-44a3cc76379d/volumes" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.289978 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" path="/var/lib/kubelet/pods/70e65531-7cfb-415d-a0a7-25288c2cd5c8/volumes" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.291197 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" path="/var/lib/kubelet/pods/eb4cf07f-4486-4ff8-88d3-b04296a09ece/volumes" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.293666 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" path="/var/lib/kubelet/pods/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b/volumes" Feb 16 12:57:49 crc kubenswrapper[4740]: 
I0216 12:57:49.320240 4740 scope.go:117] "RemoveContainer" containerID="0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.341238 4740 scope.go:117] "RemoveContainer" containerID="681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.355749 4740 scope.go:117] "RemoveContainer" containerID="3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.356160 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9\": container with ID starting with 3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9 not found: ID does not exist" containerID="3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.356204 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9"} err="failed to get container status \"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9\": rpc error: code = NotFound desc = could not find container \"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9\": container with ID starting with 3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.356225 4740 scope.go:117] "RemoveContainer" containerID="0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.356567 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7\": 
container with ID starting with 0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7 not found: ID does not exist" containerID="0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.356709 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7"} err="failed to get container status \"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7\": rpc error: code = NotFound desc = could not find container \"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7\": container with ID starting with 0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.356920 4740 scope.go:117] "RemoveContainer" containerID="681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.358484 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf\": container with ID starting with 681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf not found: ID does not exist" containerID="681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.358530 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf"} err="failed to get container status \"681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf\": rpc error: code = NotFound desc = could not find container \"681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf\": container with ID starting with 
681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf not found: ID does not exist" Feb 16 12:57:50 crc kubenswrapper[4740]: I0216 12:57:50.055531 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.908283 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.909651 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.941881 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942147 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942327 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942454 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942663 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942050 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942277 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942617 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942712 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.943196 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.943286 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.943348 4740 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.943417 4740 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.951758 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.044254 4740 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.068144 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.068186 4740 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7" exitCode=137 Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.068225 4740 scope.go:117] "RemoveContainer" containerID="d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7" Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.068320 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.080954 4740 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.089194 4740 scope.go:117] "RemoveContainer" containerID="d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7" Feb 16 12:57:53 crc kubenswrapper[4740]: E0216 12:57:53.091221 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7\": container with ID starting with d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7 not found: ID does not exist" containerID="d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7" Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.091253 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7"} err="failed to get container status \"d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7\": rpc error: code = NotFound desc = could not find container \"d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7\": container with ID starting with d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7 not found: ID does not exist" Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.287770 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.288155 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 
12:57:53.300724 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.300784 4740 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f1fcf95b-d1a1-43f9-a05e-2f6cdb428a46" Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.308594 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.308641 4740 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f1fcf95b-d1a1-43f9-a05e-2f6cdb428a46" Feb 16 12:58:06 crc kubenswrapper[4740]: I0216 12:58:06.737649 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"] Feb 16 12:58:06 crc kubenswrapper[4740]: I0216 12:58:06.738449 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager" containerID="cri-o://8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f" gracePeriod=30 Feb 16 12:58:06 crc kubenswrapper[4740]: I0216 12:58:06.855033 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"] Feb 16 12:58:06 crc kubenswrapper[4740]: I0216 12:58:06.855538 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager" 
containerID="cri-o://ba6d6a7ff15c9121a27bcfb14f6c962701892713ba24644d2ad95665183ec8f9" gracePeriod=30 Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.133094 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.135502 4740 generic.go:334] "Generic (PLEG): container finished" podID="798d1269-3882-45e8-898e-a625cf386089" containerID="8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f" exitCode=0 Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.135558 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" event={"ID":"798d1269-3882-45e8-898e-a625cf386089","Type":"ContainerDied","Data":"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f"} Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.135582 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" event={"ID":"798d1269-3882-45e8-898e-a625cf386089","Type":"ContainerDied","Data":"29e6c5dab661956c91b79a723fe07411f83f7e5c787f55a2531731add29989ac"} Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.135599 4740 scope.go:117] "RemoveContainer" containerID="8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.135653 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.137904 4740 generic.go:334] "Generic (PLEG): container finished" podID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerID="ba6d6a7ff15c9121a27bcfb14f6c962701892713ba24644d2ad95665183ec8f9" exitCode=0 Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.137934 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" event={"ID":"3b3c2258-4f58-414c-a893-c721b5ac9c03","Type":"ContainerDied","Data":"ba6d6a7ff15c9121a27bcfb14f6c962701892713ba24644d2ad95665183ec8f9"} Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.166412 4740 scope.go:117] "RemoveContainer" containerID="8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f" Feb 16 12:58:07 crc kubenswrapper[4740]: E0216 12:58:07.168802 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f\": container with ID starting with 8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f not found: ID does not exist" containerID="8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.168858 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f"} err="failed to get container status \"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f\": rpc error: code = NotFound desc = could not find container \"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f\": container with ID starting with 8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f not found: ID does not exist" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 
12:58:07.251434 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles\") pod \"798d1269-3882-45e8-898e-a625cf386089\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.251485 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkn8r\" (UniqueName: \"kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r\") pod \"798d1269-3882-45e8-898e-a625cf386089\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.251561 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca\") pod \"798d1269-3882-45e8-898e-a625cf386089\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.251580 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert\") pod \"798d1269-3882-45e8-898e-a625cf386089\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.251628 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config\") pod \"798d1269-3882-45e8-898e-a625cf386089\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.252654 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"798d1269-3882-45e8-898e-a625cf386089" (UID: "798d1269-3882-45e8-898e-a625cf386089"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.252670 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca" (OuterVolumeSpecName: "client-ca") pod "798d1269-3882-45e8-898e-a625cf386089" (UID: "798d1269-3882-45e8-898e-a625cf386089"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.253116 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config" (OuterVolumeSpecName: "config") pod "798d1269-3882-45e8-898e-a625cf386089" (UID: "798d1269-3882-45e8-898e-a625cf386089"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.257437 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "798d1269-3882-45e8-898e-a625cf386089" (UID: "798d1269-3882-45e8-898e-a625cf386089"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.257481 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r" (OuterVolumeSpecName: "kube-api-access-bkn8r") pod "798d1269-3882-45e8-898e-a625cf386089" (UID: "798d1269-3882-45e8-898e-a625cf386089"). InnerVolumeSpecName "kube-api-access-bkn8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.264142 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353019 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config\") pod \"3b3c2258-4f58-414c-a893-c721b5ac9c03\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353071 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert\") pod \"3b3c2258-4f58-414c-a893-c721b5ac9c03\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353092 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca\") pod \"3b3c2258-4f58-414c-a893-c721b5ac9c03\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353161 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6tnj\" (UniqueName: \"kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj\") pod \"3b3c2258-4f58-414c-a893-c721b5ac9c03\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353401 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353420 
4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353431 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkn8r\" (UniqueName: \"kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353440 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353448 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353758 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config" (OuterVolumeSpecName: "config") pod "3b3c2258-4f58-414c-a893-c721b5ac9c03" (UID: "3b3c2258-4f58-414c-a893-c721b5ac9c03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353798 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca" (OuterVolumeSpecName: "client-ca") pod "3b3c2258-4f58-414c-a893-c721b5ac9c03" (UID: "3b3c2258-4f58-414c-a893-c721b5ac9c03"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.356190 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3b3c2258-4f58-414c-a893-c721b5ac9c03" (UID: "3b3c2258-4f58-414c-a893-c721b5ac9c03"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.356260 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj" (OuterVolumeSpecName: "kube-api-access-h6tnj") pod "3b3c2258-4f58-414c-a893-c721b5ac9c03" (UID: "3b3c2258-4f58-414c-a893-c721b5ac9c03"). InnerVolumeSpecName "kube-api-access-h6tnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.453381 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"] Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.454468 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.454525 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.454542 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.454557 4740 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6tnj\" (UniqueName: \"kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.458376 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"] Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.143964 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" event={"ID":"3b3c2258-4f58-414c-a893-c721b5ac9c03","Type":"ContainerDied","Data":"fa1231c777ac869082e12b271c9de8f207a251381328df828b3f2a937306e447"} Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.144013 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.144015 4740 scope.go:117] "RemoveContainer" containerID="ba6d6a7ff15c9121a27bcfb14f6c962701892713ba24644d2ad95665183ec8f9" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.176525 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"] Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.179347 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"] Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.371742 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"] Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372022 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="registry-server" Feb 16 12:58:08 crc 
kubenswrapper[4740]: I0216 12:58:08.372040 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="registry-server" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372057 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="extract-content" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372066 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="extract-content" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372077 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="registry-server" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372086 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="registry-server" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372099 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="extract-utilities" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372109 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="extract-utilities" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372120 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="extract-utilities" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372129 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="extract-utilities" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372141 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" containerName="marketplace-operator" Feb 16 12:58:08 crc 
kubenswrapper[4740]: I0216 12:58:08.372151 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" containerName="marketplace-operator" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372163 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="extract-utilities" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372172 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="extract-utilities" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372180 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="registry-server" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372190 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="registry-server" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372202 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="registry-server" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372210 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="registry-server" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372219 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="extract-content" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372227 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="extract-content" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372239 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager" Feb 16 12:58:08 crc 
kubenswrapper[4740]: I0216 12:58:08.372247 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372260 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="extract-content" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372267 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="extract-content" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372278 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="extract-utilities" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372286 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="extract-utilities" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372297 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="extract-content" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372304 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="extract-content" Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372315 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372323 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372429 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager" Feb 16 
12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372443 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372460 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="registry-server" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372469 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="registry-server" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372479 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" containerName="marketplace-operator" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372489 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="registry-server" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372500 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="registry-server" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372995 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375102 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375181 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375331 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375383 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375404 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375508 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.376044 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"] Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.376657 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.384713 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.384734 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.385015 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.385030 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.385235 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.385235 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.395262 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"] Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.395656 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.400209 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"] Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.465976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzh8p\" (UniqueName: 
\"kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466035 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466062 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466077 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " 
pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466237 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466305 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466384 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466455 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4644q\" (UniqueName: \"kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.567849 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4644q\" (UniqueName: \"kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.567908 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzh8p\" (UniqueName: \"kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.567943 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.567965 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.567983 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " 
pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568032 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568065 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568096 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568958 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568993 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.569081 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.569291 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.570857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.572448 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.573043 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.589897 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzh8p\" (UniqueName: \"kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.592472 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4644q\" (UniqueName: \"kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.687517 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.694182 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.072844 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"] Feb 16 12:58:09 crc kubenswrapper[4740]: W0216 12:58:09.076844 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aaac701_1db9_48c2_9f15_61080e1c6389.slice/crio-0935455f8a4446608878957a74eec09c507482f8f39c1e65a198b86d705e95ae WatchSource:0}: Error finding container 0935455f8a4446608878957a74eec09c507482f8f39c1e65a198b86d705e95ae: Status 404 returned error can't find the container with id 0935455f8a4446608878957a74eec09c507482f8f39c1e65a198b86d705e95ae Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.107889 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"] Feb 16 12:58:09 crc kubenswrapper[4740]: W0216 12:58:09.112442 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb211397a_75e6_4a63_9e58_5320e07554e9.slice/crio-647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc WatchSource:0}: Error finding container 647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc: Status 404 returned error can't find the container with id 647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.151519 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" event={"ID":"b211397a-75e6-4a63-9e58-5320e07554e9","Type":"ContainerStarted","Data":"647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc"} Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.153185 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" event={"ID":"5aaac701-1db9-48c2-9f15-61080e1c6389","Type":"ContainerStarted","Data":"0935455f8a4446608878957a74eec09c507482f8f39c1e65a198b86d705e95ae"} Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.299684 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" path="/var/lib/kubelet/pods/3b3c2258-4f58-414c-a893-c721b5ac9c03/volumes" Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.302032 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798d1269-3882-45e8-898e-a625cf386089" path="/var/lib/kubelet/pods/798d1269-3882-45e8-898e-a625cf386089/volumes" Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.159431 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" event={"ID":"5aaac701-1db9-48c2-9f15-61080e1c6389","Type":"ContainerStarted","Data":"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832"} Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.159854 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.160736 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" event={"ID":"b211397a-75e6-4a63-9e58-5320e07554e9","Type":"ContainerStarted","Data":"0f596f60cfc20df2fe6f9a50eaee6cbff74e73a1ac2673a361d75b584fd10bfd"} Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.161022 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.169197 4740 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.169418 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.176712 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" podStartSLOduration=2.176695034 podStartE2EDuration="2.176695034s" podCreationTimestamp="2026-02-16 12:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:58:10.174341568 +0000 UTC m=+317.550690289" watchObservedRunningTime="2026-02-16 12:58:10.176695034 +0000 UTC m=+317.553043755" Feb 16 12:58:15 crc kubenswrapper[4740]: I0216 12:58:15.575908 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:58:15 crc kubenswrapper[4740]: I0216 12:58:15.576659 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:58:26 crc kubenswrapper[4740]: I0216 12:58:26.737236 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" podStartSLOduration=18.73721805 podStartE2EDuration="18.73721805s" podCreationTimestamp="2026-02-16 12:58:08 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:58:10.20761917 +0000 UTC m=+317.583967901" watchObservedRunningTime="2026-02-16 12:58:26.73721805 +0000 UTC m=+334.113566771" Feb 16 12:58:26 crc kubenswrapper[4740]: I0216 12:58:26.741284 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"] Feb 16 12:58:26 crc kubenswrapper[4740]: I0216 12:58:26.741995 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" podUID="b211397a-75e6-4a63-9e58-5320e07554e9" containerName="controller-manager" containerID="cri-o://0f596f60cfc20df2fe6f9a50eaee6cbff74e73a1ac2673a361d75b584fd10bfd" gracePeriod=30 Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.258626 4740 generic.go:334] "Generic (PLEG): container finished" podID="b211397a-75e6-4a63-9e58-5320e07554e9" containerID="0f596f60cfc20df2fe6f9a50eaee6cbff74e73a1ac2673a361d75b584fd10bfd" exitCode=0 Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.258748 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" event={"ID":"b211397a-75e6-4a63-9e58-5320e07554e9","Type":"ContainerDied","Data":"0f596f60cfc20df2fe6f9a50eaee6cbff74e73a1ac2673a361d75b584fd10bfd"} Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.259055 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" event={"ID":"b211397a-75e6-4a63-9e58-5320e07554e9","Type":"ContainerDied","Data":"647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc"} Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.259079 4740 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.274668 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.401386 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca\") pod \"b211397a-75e6-4a63-9e58-5320e07554e9\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.401435 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles\") pod \"b211397a-75e6-4a63-9e58-5320e07554e9\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.401454 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert\") pod \"b211397a-75e6-4a63-9e58-5320e07554e9\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.401612 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4644q\" (UniqueName: \"kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q\") pod \"b211397a-75e6-4a63-9e58-5320e07554e9\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.401649 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config\") pod \"b211397a-75e6-4a63-9e58-5320e07554e9\" 
(UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.402517 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b211397a-75e6-4a63-9e58-5320e07554e9" (UID: "b211397a-75e6-4a63-9e58-5320e07554e9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.402918 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config" (OuterVolumeSpecName: "config") pod "b211397a-75e6-4a63-9e58-5320e07554e9" (UID: "b211397a-75e6-4a63-9e58-5320e07554e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.402925 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "b211397a-75e6-4a63-9e58-5320e07554e9" (UID: "b211397a-75e6-4a63-9e58-5320e07554e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.408277 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q" (OuterVolumeSpecName: "kube-api-access-4644q") pod "b211397a-75e6-4a63-9e58-5320e07554e9" (UID: "b211397a-75e6-4a63-9e58-5320e07554e9"). InnerVolumeSpecName "kube-api-access-4644q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.408334 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b211397a-75e6-4a63-9e58-5320e07554e9" (UID: "b211397a-75e6-4a63-9e58-5320e07554e9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.502987 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4644q\" (UniqueName: \"kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.503023 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.503036 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.503048 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.503058 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.263143 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.290244 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"] Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.293139 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"] Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.569754 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb"] Feb 16 12:58:28 crc kubenswrapper[4740]: E0216 12:58:28.570037 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b211397a-75e6-4a63-9e58-5320e07554e9" containerName="controller-manager" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.570054 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b211397a-75e6-4a63-9e58-5320e07554e9" containerName="controller-manager" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.570163 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b211397a-75e6-4a63-9e58-5320e07554e9" containerName="controller-manager" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.570597 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.573896 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.574271 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb"] Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.574294 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.574375 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.574638 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.574778 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.575661 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.586372 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.700931 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g286v\" (UniqueName: \"kubernetes.io/projected/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-kube-api-access-g286v\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " 
pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.701018 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-client-ca\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.701039 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-proxy-ca-bundles\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.701059 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-config\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.701249 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-serving-cert\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.802203 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-client-ca\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.802266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-proxy-ca-bundles\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.802298 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-config\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.802352 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-serving-cert\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.802451 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g286v\" (UniqueName: \"kubernetes.io/projected/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-kube-api-access-g286v\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.803257 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-client-ca\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.803935 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-proxy-ca-bundles\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.804213 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-config\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.806652 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-serving-cert\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.818084 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g286v\" (UniqueName: \"kubernetes.io/projected/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-kube-api-access-g286v\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 
12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.894627 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.095146 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb"] Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.268573 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" event={"ID":"f2681eeb-b24e-4bc4-ada3-f35b91e302bc","Type":"ContainerStarted","Data":"67f4b0884927a7ac6f28c18b0952c2a395c85ea5887aac6c26097e0b5ac10786"} Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.268644 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" event={"ID":"f2681eeb-b24e-4bc4-ada3-f35b91e302bc","Type":"ContainerStarted","Data":"5aa610d805d137258a354c7b423e8461ce9c59e766c64e9de3a901e230c5683e"} Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.268848 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.270456 4740 patch_prober.go:28] interesting pod/controller-manager-5b9866ffcd-z8ptb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.270497 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" podUID="f2681eeb-b24e-4bc4-ada3-f35b91e302bc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 
10.217.0.61:8443: connect: connection refused" Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.293611 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b211397a-75e6-4a63-9e58-5320e07554e9" path="/var/lib/kubelet/pods/b211397a-75e6-4a63-9e58-5320e07554e9/volumes" Feb 16 12:58:30 crc kubenswrapper[4740]: I0216 12:58:30.278139 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:30 crc kubenswrapper[4740]: I0216 12:58:30.295234 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" podStartSLOduration=4.295213278 podStartE2EDuration="4.295213278s" podCreationTimestamp="2026-02-16 12:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:58:29.289196448 +0000 UTC m=+336.665545169" watchObservedRunningTime="2026-02-16 12:58:30.295213278 +0000 UTC m=+337.671562009" Feb 16 12:58:45 crc kubenswrapper[4740]: I0216 12:58:45.575021 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:58:45 crc kubenswrapper[4740]: I0216 12:58:45.575802 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.441744 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-drs7f"] Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.443138 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.467433 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-drs7f"] Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602707 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602774 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-certificates\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602806 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7abcf159-ee53-4f68-8e0d-aa863b58e081-installation-pull-secrets\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602842 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-trusted-ca\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602869 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7abcf159-ee53-4f68-8e0d-aa863b58e081-ca-trust-extracted\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602888 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2jd\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-kube-api-access-lr2jd\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.603026 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-tls\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.603097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-bound-sa-token\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc 
kubenswrapper[4740]: I0216 12:58:58.621795 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704399 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7abcf159-ee53-4f68-8e0d-aa863b58e081-installation-pull-secrets\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704463 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-trusted-ca\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704504 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7abcf159-ee53-4f68-8e0d-aa863b58e081-ca-trust-extracted\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704530 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr2jd\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-kube-api-access-lr2jd\") pod \"image-registry-66df7c8f76-drs7f\" (UID: 
\"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704562 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-tls\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704581 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-bound-sa-token\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704636 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-certificates\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.705634 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7abcf159-ee53-4f68-8e0d-aa863b58e081-ca-trust-extracted\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.706550 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-certificates\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.706584 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-trusted-ca\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.710681 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-tls\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.712418 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7abcf159-ee53-4f68-8e0d-aa863b58e081-installation-pull-secrets\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.722534 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-bound-sa-token\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.724399 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lr2jd\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-kube-api-access-lr2jd\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.760294 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:59 crc kubenswrapper[4740]: I0216 12:58:59.153366 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-drs7f"] Feb 16 12:58:59 crc kubenswrapper[4740]: I0216 12:58:59.449086 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" event={"ID":"7abcf159-ee53-4f68-8e0d-aa863b58e081","Type":"ContainerStarted","Data":"ab8b0055f252dfd52267f967f66d04886a020a9111617d4cdc70ff990edab77e"} Feb 16 12:58:59 crc kubenswrapper[4740]: I0216 12:58:59.449466 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:59 crc kubenswrapper[4740]: I0216 12:58:59.449484 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" event={"ID":"7abcf159-ee53-4f68-8e0d-aa863b58e081","Type":"ContainerStarted","Data":"d31fa87a46dbae15230a3bc84edd2309ec58c1cfbc407eff5eaa76b90ca87715"} Feb 16 12:58:59 crc kubenswrapper[4740]: I0216 12:58:59.465349 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" podStartSLOduration=1.465332846 podStartE2EDuration="1.465332846s" podCreationTimestamp="2026-02-16 12:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:58:59.464366326 
+0000 UTC m=+366.840715057" watchObservedRunningTime="2026-02-16 12:58:59.465332846 +0000 UTC m=+366.841681567" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.067441 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7j9d2"] Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.073881 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.081311 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.082562 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j9d2"] Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.236794 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-catalog-content\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.236854 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9bl\" (UniqueName: \"kubernetes.io/projected/b4d0e942-91bf-460d-9465-2633c1436b2c-kube-api-access-kq9bl\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.236891 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-utilities\") pod \"redhat-operators-7j9d2\" (UID: 
\"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.254633 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xbn89"] Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.256618 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.258709 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.260951 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbn89"] Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.338076 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-utilities\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.338170 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-catalog-content\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.338194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq9bl\" (UniqueName: \"kubernetes.io/projected/b4d0e942-91bf-460d-9465-2633c1436b2c-kube-api-access-kq9bl\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 
16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.338912 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-utilities\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.339182 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-catalog-content\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.357949 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq9bl\" (UniqueName: \"kubernetes.io/projected/b4d0e942-91bf-460d-9465-2633c1436b2c-kube-api-access-kq9bl\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.401570 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.439082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz65x\" (UniqueName: \"kubernetes.io/projected/60d9eb5f-5eed-4968-beae-0001d2d70d2a-kube-api-access-lz65x\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.439207 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-utilities\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.439365 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-catalog-content\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.540672 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-catalog-content\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.541055 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz65x\" (UniqueName: \"kubernetes.io/projected/60d9eb5f-5eed-4968-beae-0001d2d70d2a-kube-api-access-lz65x\") pod 
\"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.541104 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-utilities\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.541285 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-catalog-content\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.541549 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-utilities\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.558229 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz65x\" (UniqueName: \"kubernetes.io/projected/60d9eb5f-5eed-4968-beae-0001d2d70d2a-kube-api-access-lz65x\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.574028 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.794645 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j9d2"] Feb 16 12:59:01 crc kubenswrapper[4740]: W0216 12:59:01.804413 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4d0e942_91bf_460d_9465_2633c1436b2c.slice/crio-2f0eeec1414fa8ccf3e3858f2875ac4ad83585738051dc216a4a8dafa072f29e WatchSource:0}: Error finding container 2f0eeec1414fa8ccf3e3858f2875ac4ad83585738051dc216a4a8dafa072f29e: Status 404 returned error can't find the container with id 2f0eeec1414fa8ccf3e3858f2875ac4ad83585738051dc216a4a8dafa072f29e Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.962229 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbn89"] Feb 16 12:59:02 crc kubenswrapper[4740]: W0216 12:59:02.027499 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60d9eb5f_5eed_4968_beae_0001d2d70d2a.slice/crio-014fe10003094e74c9d4903b18567037fd42e9619404f4290e10c57f99aa52f9 WatchSource:0}: Error finding container 014fe10003094e74c9d4903b18567037fd42e9619404f4290e10c57f99aa52f9: Status 404 returned error can't find the container with id 014fe10003094e74c9d4903b18567037fd42e9619404f4290e10c57f99aa52f9 Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.464157 4740 generic.go:334] "Generic (PLEG): container finished" podID="b4d0e942-91bf-460d-9465-2633c1436b2c" containerID="4bda33e6f14cffbc36a62c5a996a64f5735a7b2297d687b038ca79c807709233" exitCode=0 Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.464267 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j9d2" 
event={"ID":"b4d0e942-91bf-460d-9465-2633c1436b2c","Type":"ContainerDied","Data":"4bda33e6f14cffbc36a62c5a996a64f5735a7b2297d687b038ca79c807709233"} Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.464569 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j9d2" event={"ID":"b4d0e942-91bf-460d-9465-2633c1436b2c","Type":"ContainerStarted","Data":"2f0eeec1414fa8ccf3e3858f2875ac4ad83585738051dc216a4a8dafa072f29e"} Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.468503 4740 generic.go:334] "Generic (PLEG): container finished" podID="60d9eb5f-5eed-4968-beae-0001d2d70d2a" containerID="edeaa38751ff8dea32d9594156dbb99c077dd0940322d603eef22f1a86631be1" exitCode=0 Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.468530 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbn89" event={"ID":"60d9eb5f-5eed-4968-beae-0001d2d70d2a","Type":"ContainerDied","Data":"edeaa38751ff8dea32d9594156dbb99c077dd0940322d603eef22f1a86631be1"} Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.468572 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbn89" event={"ID":"60d9eb5f-5eed-4968-beae-0001d2d70d2a","Type":"ContainerStarted","Data":"014fe10003094e74c9d4903b18567037fd42e9619404f4290e10c57f99aa52f9"} Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.651521 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-czkjl"] Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.655140 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.657783 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.658027 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-czkjl"] Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.755291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.755414 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.755544 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbgkp\" (UniqueName: \"kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.856190 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbgkp\" (UniqueName: \"kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp\") pod \"community-operators-czkjl\" 
(UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.856266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.856294 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.856676 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.856713 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.875975 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbgkp\" (UniqueName: \"kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " 
pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.985317 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:03 crc kubenswrapper[4740]: I0216 12:59:03.430716 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-czkjl"] Feb 16 12:59:03 crc kubenswrapper[4740]: I0216 12:59:03.477315 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j9d2" event={"ID":"b4d0e942-91bf-460d-9465-2633c1436b2c","Type":"ContainerStarted","Data":"55630a7d95c7931350189da26c08d5118fd245a36186a327ebc0a11d26638921"} Feb 16 12:59:03 crc kubenswrapper[4740]: W0216 12:59:03.484106 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ca213d9_ef6f_4240_aa95_fe7f4e2691cf.slice/crio-98fe05c00f99e38008108c07c4337266311b937b88c3c95f0dd66f754946345d WatchSource:0}: Error finding container 98fe05c00f99e38008108c07c4337266311b937b88c3c95f0dd66f754946345d: Status 404 returned error can't find the container with id 98fe05c00f99e38008108c07c4337266311b937b88c3c95f0dd66f754946345d Feb 16 12:59:03 crc kubenswrapper[4740]: I0216 12:59:03.489742 4740 generic.go:334] "Generic (PLEG): container finished" podID="60d9eb5f-5eed-4968-beae-0001d2d70d2a" containerID="6674ac7a76297138bb11087f1caca58071ad3df92ab4c9b530c89adfec966b55" exitCode=0 Feb 16 12:59:03 crc kubenswrapper[4740]: I0216 12:59:03.489784 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbn89" event={"ID":"60d9eb5f-5eed-4968-beae-0001d2d70d2a","Type":"ContainerDied","Data":"6674ac7a76297138bb11087f1caca58071ad3df92ab4c9b530c89adfec966b55"} Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.447738 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-lv7b8"] Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.448928 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.450397 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.458685 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lv7b8"] Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.495510 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerID="a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6" exitCode=0 Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.495602 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerDied","Data":"a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6"} Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.496469 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerStarted","Data":"98fe05c00f99e38008108c07c4337266311b937b88c3c95f0dd66f754946345d"} Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.498521 4740 generic.go:334] "Generic (PLEG): container finished" podID="b4d0e942-91bf-460d-9465-2633c1436b2c" containerID="55630a7d95c7931350189da26c08d5118fd245a36186a327ebc0a11d26638921" exitCode=0 Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.498636 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j9d2" 
event={"ID":"b4d0e942-91bf-460d-9465-2633c1436b2c","Type":"ContainerDied","Data":"55630a7d95c7931350189da26c08d5118fd245a36186a327ebc0a11d26638921"} Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.501199 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbn89" event={"ID":"60d9eb5f-5eed-4968-beae-0001d2d70d2a","Type":"ContainerStarted","Data":"2b076548f55d0299dbecf70c82abd75bf9d182507be041a169846555b8f983cc"} Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.553871 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xbn89" podStartSLOduration=1.848472873 podStartE2EDuration="3.553847316s" podCreationTimestamp="2026-02-16 12:59:01 +0000 UTC" firstStartedPulling="2026-02-16 12:59:02.470561821 +0000 UTC m=+369.846910552" lastFinishedPulling="2026-02-16 12:59:04.175936274 +0000 UTC m=+371.552284995" observedRunningTime="2026-02-16 12:59:04.549959353 +0000 UTC m=+371.926308084" watchObservedRunningTime="2026-02-16 12:59:04.553847316 +0000 UTC m=+371.930196037" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.578948 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-utilities\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.579035 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85wnp\" (UniqueName: \"kubernetes.io/projected/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-kube-api-access-85wnp\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.579096 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-catalog-content\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.680465 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-utilities\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.680520 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85wnp\" (UniqueName: \"kubernetes.io/projected/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-kube-api-access-85wnp\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.680579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-catalog-content\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.681488 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-catalog-content\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.681582 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-utilities\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.697209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85wnp\" (UniqueName: \"kubernetes.io/projected/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-kube-api-access-85wnp\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.766627 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.179169 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lv7b8"] Feb 16 12:59:05 crc kubenswrapper[4740]: W0216 12:59:05.190080 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ecdfb1a_6379_4a42_a4c7_da582898b1f3.slice/crio-ef07fddb16adf6ff0fcde79fd00003f37cc58ea87b7e487520a88cc0b4374185 WatchSource:0}: Error finding container ef07fddb16adf6ff0fcde79fd00003f37cc58ea87b7e487520a88cc0b4374185: Status 404 returned error can't find the container with id ef07fddb16adf6ff0fcde79fd00003f37cc58ea87b7e487520a88cc0b4374185 Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.506067 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ecdfb1a-6379-4a42-a4c7-da582898b1f3" containerID="b9681fc00c3849d4127c0f84c4c14c83ba8027d6a6d308437753c02d5a07fc74" exitCode=0 Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.506116 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-lv7b8" event={"ID":"6ecdfb1a-6379-4a42-a4c7-da582898b1f3","Type":"ContainerDied","Data":"b9681fc00c3849d4127c0f84c4c14c83ba8027d6a6d308437753c02d5a07fc74"} Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.506148 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lv7b8" event={"ID":"6ecdfb1a-6379-4a42-a4c7-da582898b1f3","Type":"ContainerStarted","Data":"ef07fddb16adf6ff0fcde79fd00003f37cc58ea87b7e487520a88cc0b4374185"} Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.509147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerStarted","Data":"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767"} Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.530162 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j9d2" event={"ID":"b4d0e942-91bf-460d-9465-2633c1436b2c","Type":"ContainerStarted","Data":"6b690e9acb7f917cedd3cbd4afc8d030e9fa6e80bf476388d76e9eaa038374e8"} Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.569302 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7j9d2" podStartSLOduration=2.104386698 podStartE2EDuration="4.569280662s" podCreationTimestamp="2026-02-16 12:59:01 +0000 UTC" firstStartedPulling="2026-02-16 12:59:02.467500174 +0000 UTC m=+369.843848905" lastFinishedPulling="2026-02-16 12:59:04.932394148 +0000 UTC m=+372.308742869" observedRunningTime="2026-02-16 12:59:05.569263232 +0000 UTC m=+372.945611963" watchObservedRunningTime="2026-02-16 12:59:05.569280662 +0000 UTC m=+372.945629383" Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.536596 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ecdfb1a-6379-4a42-a4c7-da582898b1f3" 
containerID="4ca9c7da352a1b4d8535a2b52f97e4e08077c4d693b6aff48cf3321b712cc493" exitCode=0 Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.536699 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lv7b8" event={"ID":"6ecdfb1a-6379-4a42-a4c7-da582898b1f3","Type":"ContainerDied","Data":"4ca9c7da352a1b4d8535a2b52f97e4e08077c4d693b6aff48cf3321b712cc493"} Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.540007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerDied","Data":"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767"} Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.540903 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerID="591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767" exitCode=0 Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.752392 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"] Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.752591 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" podUID="5aaac701-1db9-48c2-9f15-61080e1c6389" containerName="route-controller-manager" containerID="cri-o://a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832" gracePeriod=30 Feb 16 12:59:06 crc kubenswrapper[4740]: E0216 12:59:06.820425 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aaac701_1db9_48c2_9f15_61080e1c6389.slice/crio-a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832.scope\": RecentStats: unable to find data in 
memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aaac701_1db9_48c2_9f15_61080e1c6389.slice/crio-conmon-a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832.scope\": RecentStats: unable to find data in memory cache]" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.139263 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.319126 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert\") pod \"5aaac701-1db9-48c2-9f15-61080e1c6389\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.319193 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzh8p\" (UniqueName: \"kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p\") pod \"5aaac701-1db9-48c2-9f15-61080e1c6389\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.319219 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca\") pod \"5aaac701-1db9-48c2-9f15-61080e1c6389\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.319306 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config\") pod \"5aaac701-1db9-48c2-9f15-61080e1c6389\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.320094 4740 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca" (OuterVolumeSpecName: "client-ca") pod "5aaac701-1db9-48c2-9f15-61080e1c6389" (UID: "5aaac701-1db9-48c2-9f15-61080e1c6389"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.320411 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config" (OuterVolumeSpecName: "config") pod "5aaac701-1db9-48c2-9f15-61080e1c6389" (UID: "5aaac701-1db9-48c2-9f15-61080e1c6389"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.330070 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5aaac701-1db9-48c2-9f15-61080e1c6389" (UID: "5aaac701-1db9-48c2-9f15-61080e1c6389"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.330111 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p" (OuterVolumeSpecName: "kube-api-access-mzh8p") pod "5aaac701-1db9-48c2-9f15-61080e1c6389" (UID: "5aaac701-1db9-48c2-9f15-61080e1c6389"). InnerVolumeSpecName "kube-api-access-mzh8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.420501 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.420546 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.420563 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzh8p\" (UniqueName: \"kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.420580 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.550451 4740 generic.go:334] "Generic (PLEG): container finished" podID="5aaac701-1db9-48c2-9f15-61080e1c6389" containerID="a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832" exitCode=0 Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.550504 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" event={"ID":"5aaac701-1db9-48c2-9f15-61080e1c6389","Type":"ContainerDied","Data":"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832"} Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.550530 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.550551 4740 scope.go:117] "RemoveContainer" containerID="a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.550540 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" event={"ID":"5aaac701-1db9-48c2-9f15-61080e1c6389","Type":"ContainerDied","Data":"0935455f8a4446608878957a74eec09c507482f8f39c1e65a198b86d705e95ae"} Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.587679 4740 scope.go:117] "RemoveContainer" containerID="a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832" Feb 16 12:59:07 crc kubenswrapper[4740]: E0216 12:59:07.590769 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832\": container with ID starting with a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832 not found: ID does not exist" containerID="a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.592632 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832"} err="failed to get container status \"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832\": rpc error: code = NotFound desc = could not find container \"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832\": container with ID starting with a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832 not found: ID does not exist" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.597411 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"] Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.601610 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"] Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.152623 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs"] Feb 16 12:59:08 crc kubenswrapper[4740]: E0216 12:59:08.153153 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aaac701-1db9-48c2-9f15-61080e1c6389" containerName="route-controller-manager" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.153168 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aaac701-1db9-48c2-9f15-61080e1c6389" containerName="route-controller-manager" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.153289 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aaac701-1db9-48c2-9f15-61080e1c6389" containerName="route-controller-manager" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.153735 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.156900 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.157093 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.157252 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.157552 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.157781 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.162469 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs"] Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.165415 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.332857 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-config\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.332919 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkzc2\" (UniqueName: \"kubernetes.io/projected/df55ec5c-7923-476c-aaaf-722391d7d31d-kube-api-access-pkzc2\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.333003 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-client-ca\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.333024 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df55ec5c-7923-476c-aaaf-722391d7d31d-serving-cert\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.434448 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-config\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.435984 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkzc2\" (UniqueName: \"kubernetes.io/projected/df55ec5c-7923-476c-aaaf-722391d7d31d-kube-api-access-pkzc2\") pod 
\"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.436080 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-client-ca\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.436101 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df55ec5c-7923-476c-aaaf-722391d7d31d-serving-cert\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.435920 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-config\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.437427 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-client-ca\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.439870 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df55ec5c-7923-476c-aaaf-722391d7d31d-serving-cert\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.461547 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkzc2\" (UniqueName: \"kubernetes.io/projected/df55ec5c-7923-476c-aaaf-722391d7d31d-kube-api-access-pkzc2\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.472042 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.568800 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerStarted","Data":"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b"} Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.571343 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lv7b8" event={"ID":"6ecdfb1a-6379-4a42-a4c7-da582898b1f3","Type":"ContainerStarted","Data":"6ea241b473697cd30c3c1ea849055a0cb8c3fb831551d7949b72f509bb45df94"} Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.587233 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-czkjl" podStartSLOduration=3.486601856 podStartE2EDuration="6.587217147s" podCreationTimestamp="2026-02-16 12:59:02 +0000 UTC" firstStartedPulling="2026-02-16 12:59:04.497720367 +0000 UTC 
m=+371.874069088" lastFinishedPulling="2026-02-16 12:59:07.598335658 +0000 UTC m=+374.974684379" observedRunningTime="2026-02-16 12:59:08.585269066 +0000 UTC m=+375.961617787" watchObservedRunningTime="2026-02-16 12:59:08.587217147 +0000 UTC m=+375.963565868" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.604992 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lv7b8" podStartSLOduration=2.459126729 podStartE2EDuration="4.604974607s" podCreationTimestamp="2026-02-16 12:59:04 +0000 UTC" firstStartedPulling="2026-02-16 12:59:05.50736132 +0000 UTC m=+372.883710041" lastFinishedPulling="2026-02-16 12:59:07.653209198 +0000 UTC m=+375.029557919" observedRunningTime="2026-02-16 12:59:08.603940844 +0000 UTC m=+375.980289575" watchObservedRunningTime="2026-02-16 12:59:08.604974607 +0000 UTC m=+375.981323328" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.888184 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs"] Feb 16 12:59:08 crc kubenswrapper[4740]: W0216 12:59:08.897023 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf55ec5c_7923_476c_aaaf_722391d7d31d.slice/crio-267bffcc76cce3a1ef40c4121c634769c9bc70e1c39ae6f6d6e422be4a2529ef WatchSource:0}: Error finding container 267bffcc76cce3a1ef40c4121c634769c9bc70e1c39ae6f6d6e422be4a2529ef: Status 404 returned error can't find the container with id 267bffcc76cce3a1ef40c4121c634769c9bc70e1c39ae6f6d6e422be4a2529ef Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.287826 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aaac701-1db9-48c2-9f15-61080e1c6389" path="/var/lib/kubelet/pods/5aaac701-1db9-48c2-9f15-61080e1c6389/volumes" Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.578062 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" event={"ID":"df55ec5c-7923-476c-aaaf-722391d7d31d","Type":"ContainerStarted","Data":"8d93c3460ca29ef4cb976e1256043952d5580230008ff267f9c8add36b3f0eaa"} Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.578121 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" event={"ID":"df55ec5c-7923-476c-aaaf-722391d7d31d","Type":"ContainerStarted","Data":"267bffcc76cce3a1ef40c4121c634769c9bc70e1c39ae6f6d6e422be4a2529ef"} Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.578575 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.583607 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.598958 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" podStartSLOduration=3.598936317 podStartE2EDuration="3.598936317s" podCreationTimestamp="2026-02-16 12:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:59:09.594760115 +0000 UTC m=+376.971108856" watchObservedRunningTime="2026-02-16 12:59:09.598936317 +0000 UTC m=+376.975285048" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.402310 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.402940 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.448456 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.575481 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.575559 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.681524 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.697152 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.742617 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:13 crc kubenswrapper[4740]: I0216 12:59:12.985738 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:13 crc kubenswrapper[4740]: I0216 12:59:12.986059 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:13 crc kubenswrapper[4740]: I0216 12:59:13.061371 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:13 crc kubenswrapper[4740]: I0216 12:59:13.674569 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:14 crc kubenswrapper[4740]: I0216 
12:59:14.766946 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:14 crc kubenswrapper[4740]: I0216 12:59:14.766998 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:14 crc kubenswrapper[4740]: I0216 12:59:14.810396 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.575743 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.576210 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.576285 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.577457 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.577581 4740 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4" gracePeriod=600 Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.687942 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:16 crc kubenswrapper[4740]: I0216 12:59:16.630575 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4" exitCode=0 Feb 16 12:59:16 crc kubenswrapper[4740]: I0216 12:59:16.630715 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4"} Feb 16 12:59:16 crc kubenswrapper[4740]: I0216 12:59:16.631513 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7"} Feb 16 12:59:16 crc kubenswrapper[4740]: I0216 12:59:16.631546 4740 scope.go:117] "RemoveContainer" containerID="2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b" Feb 16 12:59:18 crc kubenswrapper[4740]: I0216 12:59:18.770603 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:59:18 crc kubenswrapper[4740]: I0216 12:59:18.841530 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"] Feb 16 12:59:43 crc kubenswrapper[4740]: I0216 12:59:43.897755 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" podUID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" containerName="registry" containerID="cri-o://bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da" gracePeriod=30 Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.275876 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366386 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr6wd\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366560 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366597 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366618 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366639 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366678 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366715 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366753 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.367754 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.368199 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.373310 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.373588 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.373976 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd" (OuterVolumeSpecName: "kube-api-access-vr6wd") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "kube-api-access-vr6wd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.375451 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.375783 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.384934 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468009 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468050 4740 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468065 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468078 4740 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468089 4740 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468101 4740 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468113 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr6wd\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc 
kubenswrapper[4740]: I0216 12:59:44.806919 4740 generic.go:334] "Generic (PLEG): container finished" podID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" containerID="bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da" exitCode=0 Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.807027 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.807124 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" event={"ID":"56fbd3c7-a514-479c-9b0f-1cdb3025cae6","Type":"ContainerDied","Data":"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da"} Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.807728 4740 scope.go:117] "RemoveContainer" containerID="bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.807669 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" event={"ID":"56fbd3c7-a514-479c-9b0f-1cdb3025cae6","Type":"ContainerDied","Data":"1104556d5cde5c0aa4a407502225880f615d1c9eedcf19e3ada6ce6e63d3b266"} Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.830967 4740 scope.go:117] "RemoveContainer" containerID="bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da" Feb 16 12:59:44 crc kubenswrapper[4740]: E0216 12:59:44.833790 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da\": container with ID starting with bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da not found: ID does not exist" containerID="bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.833983 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da"} err="failed to get container status \"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da\": rpc error: code = NotFound desc = could not find container \"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da\": container with ID starting with bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da not found: ID does not exist" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.850173 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"] Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.856511 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"] Feb 16 12:59:45 crc kubenswrapper[4740]: I0216 12:59:45.286130 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" path="/var/lib/kubelet/pods/56fbd3c7-a514-479c-9b0f-1cdb3025cae6/volumes" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.183225 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r"] Feb 16 13:00:00 crc kubenswrapper[4740]: E0216 13:00:00.183943 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" containerName="registry" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.183957 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" containerName="registry" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.184059 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" containerName="registry" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.184428 4740 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.186543 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.187004 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.197996 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r"] Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.271922 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.272025 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.272049 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jfmg\" (UniqueName: \"kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.373151 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.373194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jfmg\" (UniqueName: \"kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.373241 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.374102 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.380122 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.391898 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jfmg\" (UniqueName: \"kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.501632 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.936042 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r"] Feb 16 13:00:01 crc kubenswrapper[4740]: I0216 13:00:01.906264 4740 generic.go:334] "Generic (PLEG): container finished" podID="1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" containerID="db29968995b45d1f7cc2cd53a227253b37be7ae972a329e7a6e867128e553405" exitCode=0 Feb 16 13:00:01 crc kubenswrapper[4740]: I0216 13:00:01.906432 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" event={"ID":"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb","Type":"ContainerDied","Data":"db29968995b45d1f7cc2cd53a227253b37be7ae972a329e7a6e867128e553405"} Feb 16 13:00:01 crc kubenswrapper[4740]: I0216 13:00:01.906630 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" 
event={"ID":"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb","Type":"ContainerStarted","Data":"290b315b65da51d40853ecb68d4c0084e8b605bd187376ad3d3532e80625ed4e"} Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.150634 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.218089 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume\") pod \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.218210 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jfmg\" (UniqueName: \"kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg\") pod \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.218235 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume\") pod \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.219159 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" (UID: "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.224449 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" (UID: "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.225438 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg" (OuterVolumeSpecName: "kube-api-access-5jfmg") pod "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" (UID: "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb"). InnerVolumeSpecName "kube-api-access-5jfmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.319843 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.319896 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jfmg\" (UniqueName: \"kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.319914 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.919046 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" 
event={"ID":"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb","Type":"ContainerDied","Data":"290b315b65da51d40853ecb68d4c0084e8b605bd187376ad3d3532e80625ed4e"} Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.919086 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="290b315b65da51d40853ecb68d4c0084e8b605bd187376ad3d3532e80625ed4e" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.919093 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:01:15 crc kubenswrapper[4740]: I0216 13:01:15.575173 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:01:15 crc kubenswrapper[4740]: I0216 13:01:15.576028 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:01:45 crc kubenswrapper[4740]: I0216 13:01:45.574930 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:01:45 crc kubenswrapper[4740]: I0216 13:01:45.575489 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:02:15 crc kubenswrapper[4740]: I0216 13:02:15.575428 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:02:15 crc kubenswrapper[4740]: I0216 13:02:15.576037 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:02:15 crc kubenswrapper[4740]: I0216 13:02:15.576088 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:02:15 crc kubenswrapper[4740]: I0216 13:02:15.576766 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:02:15 crc kubenswrapper[4740]: I0216 13:02:15.576853 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7" gracePeriod=600 Feb 16 13:02:16 crc kubenswrapper[4740]: I0216 13:02:16.668236 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7" exitCode=0 Feb 16 13:02:16 crc kubenswrapper[4740]: I0216 13:02:16.668284 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7"} Feb 16 13:02:16 crc kubenswrapper[4740]: I0216 13:02:16.669028 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c"} Feb 16 13:02:16 crc kubenswrapper[4740]: I0216 13:02:16.669059 4740 scope.go:117] "RemoveContainer" containerID="b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.629262 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh"] Feb 16 13:03:09 crc kubenswrapper[4740]: E0216 13:03:09.630103 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" containerName="collect-profiles" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.630121 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" containerName="collect-profiles" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.630247 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" containerName="collect-profiles" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.630710 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.632620 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.632719 4740 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2k78b" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.633841 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.636564 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-kflg5"] Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.637395 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kflg5" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.638887 4740 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-bgvh8" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.653342 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh"] Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.670383 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25fnr"] Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.671170 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.674189 4740 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-c82xf" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.676045 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25fnr"] Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.683149 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kflg5"] Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.785084 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lc82\" (UniqueName: \"kubernetes.io/projected/beeada69-65c5-434a-af02-8e6b23e13138-kube-api-access-5lc82\") pod \"cert-manager-cainjector-cf98fcc89-hpjbh\" (UID: \"beeada69-65c5-434a-af02-8e6b23e13138\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.785170 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxb6l\" (UniqueName: \"kubernetes.io/projected/8b35e0e1-44f6-4481-a71e-98e3f8462bb7-kube-api-access-cxb6l\") pod \"cert-manager-858654f9db-kflg5\" (UID: \"8b35e0e1-44f6-4481-a71e-98e3f8462bb7\") " pod="cert-manager/cert-manager-858654f9db-kflg5" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.785191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46864\" (UniqueName: \"kubernetes.io/projected/a68020b3-17ff-43dc-b17d-0845940c0758-kube-api-access-46864\") pod \"cert-manager-webhook-687f57d79b-25fnr\" (UID: \"a68020b3-17ff-43dc-b17d-0845940c0758\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.886625 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxb6l\" (UniqueName: \"kubernetes.io/projected/8b35e0e1-44f6-4481-a71e-98e3f8462bb7-kube-api-access-cxb6l\") pod \"cert-manager-858654f9db-kflg5\" (UID: \"8b35e0e1-44f6-4481-a71e-98e3f8462bb7\") " pod="cert-manager/cert-manager-858654f9db-kflg5" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.886665 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46864\" (UniqueName: \"kubernetes.io/projected/a68020b3-17ff-43dc-b17d-0845940c0758-kube-api-access-46864\") pod \"cert-manager-webhook-687f57d79b-25fnr\" (UID: \"a68020b3-17ff-43dc-b17d-0845940c0758\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.886720 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lc82\" (UniqueName: \"kubernetes.io/projected/beeada69-65c5-434a-af02-8e6b23e13138-kube-api-access-5lc82\") pod \"cert-manager-cainjector-cf98fcc89-hpjbh\" (UID: \"beeada69-65c5-434a-af02-8e6b23e13138\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.906672 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lc82\" (UniqueName: \"kubernetes.io/projected/beeada69-65c5-434a-af02-8e6b23e13138-kube-api-access-5lc82\") pod \"cert-manager-cainjector-cf98fcc89-hpjbh\" (UID: \"beeada69-65c5-434a-af02-8e6b23e13138\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.908391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxb6l\" (UniqueName: \"kubernetes.io/projected/8b35e0e1-44f6-4481-a71e-98e3f8462bb7-kube-api-access-cxb6l\") pod \"cert-manager-858654f9db-kflg5\" (UID: \"8b35e0e1-44f6-4481-a71e-98e3f8462bb7\") " 
pod="cert-manager/cert-manager-858654f9db-kflg5" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.911083 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46864\" (UniqueName: \"kubernetes.io/projected/a68020b3-17ff-43dc-b17d-0845940c0758-kube-api-access-46864\") pod \"cert-manager-webhook-687f57d79b-25fnr\" (UID: \"a68020b3-17ff-43dc-b17d-0845940c0758\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.962312 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.970006 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kflg5" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.990493 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.373783 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kflg5"] Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.383650 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.417626 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25fnr"] Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.420302 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh"] Feb 16 13:03:10 crc kubenswrapper[4740]: W0216 13:03:10.422656 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeeada69_65c5_434a_af02_8e6b23e13138.slice/crio-ee3de3ab33c657a0cccf21a60e2e83a31db365bb4c85c5a6aac5314fbf4b89de WatchSource:0}: Error finding container ee3de3ab33c657a0cccf21a60e2e83a31db365bb4c85c5a6aac5314fbf4b89de: Status 404 returned error can't find the container with id ee3de3ab33c657a0cccf21a60e2e83a31db365bb4c85c5a6aac5314fbf4b89de Feb 16 13:03:10 crc kubenswrapper[4740]: W0216 13:03:10.427003 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda68020b3_17ff_43dc_b17d_0845940c0758.slice/crio-3fa6f37bfb7f56461f094fe9ae1c518a5f293c06febd6c4eed68866d35462cf3 WatchSource:0}: Error finding container 3fa6f37bfb7f56461f094fe9ae1c518a5f293c06febd6c4eed68866d35462cf3: Status 404 returned error can't find the container with id 3fa6f37bfb7f56461f094fe9ae1c518a5f293c06febd6c4eed68866d35462cf3 Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.997805 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" event={"ID":"beeada69-65c5-434a-af02-8e6b23e13138","Type":"ContainerStarted","Data":"ee3de3ab33c657a0cccf21a60e2e83a31db365bb4c85c5a6aac5314fbf4b89de"} Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.999077 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kflg5" event={"ID":"8b35e0e1-44f6-4481-a71e-98e3f8462bb7","Type":"ContainerStarted","Data":"f88db5297ec631ac828a43f4dcb16e8b31b6299d219ee16e91a4ed0eac3a62cd"} Feb 16 13:03:11 crc kubenswrapper[4740]: I0216 13:03:11.000108 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" event={"ID":"a68020b3-17ff-43dc-b17d-0845940c0758","Type":"ContainerStarted","Data":"3fa6f37bfb7f56461f094fe9ae1c518a5f293c06febd6c4eed68866d35462cf3"} Feb 16 13:03:14 crc kubenswrapper[4740]: I0216 13:03:14.017559 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kflg5" event={"ID":"8b35e0e1-44f6-4481-a71e-98e3f8462bb7","Type":"ContainerStarted","Data":"557e212c7284a8e868eb014f3db352425d2b82aaf15788cbfe05682a7e9cf678"} Feb 16 13:03:14 crc kubenswrapper[4740]: I0216 13:03:14.019386 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" event={"ID":"a68020b3-17ff-43dc-b17d-0845940c0758","Type":"ContainerStarted","Data":"b61fe2893ec93647eb8b9b0b95599eab9997ea713d93302e8ee7d81b46ddceb2"} Feb 16 13:03:14 crc kubenswrapper[4740]: I0216 13:03:14.019526 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:14 crc kubenswrapper[4740]: I0216 13:03:14.034017 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-kflg5" podStartSLOduration=2.213561926 podStartE2EDuration="5.033997481s" podCreationTimestamp="2026-02-16 13:03:09 +0000 UTC" firstStartedPulling="2026-02-16 13:03:10.383247705 +0000 UTC m=+617.759596436" lastFinishedPulling="2026-02-16 13:03:13.20368327 +0000 UTC m=+620.580031991" observedRunningTime="2026-02-16 13:03:14.03214425 +0000 UTC m=+621.408492971" watchObservedRunningTime="2026-02-16 13:03:14.033997481 +0000 UTC m=+621.410346212" Feb 16 13:03:14 crc kubenswrapper[4740]: I0216 13:03:14.057991 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" podStartSLOduration=2.274933787 podStartE2EDuration="5.057949742s" podCreationTimestamp="2026-02-16 13:03:09 +0000 UTC" firstStartedPulling="2026-02-16 13:03:10.429257632 +0000 UTC m=+617.805606343" lastFinishedPulling="2026-02-16 13:03:13.212273577 +0000 UTC m=+620.588622298" observedRunningTime="2026-02-16 13:03:14.053222043 +0000 UTC m=+621.429570764" watchObservedRunningTime="2026-02-16 13:03:14.057949742 
+0000 UTC m=+621.434298473"
Feb 16 13:03:15 crc kubenswrapper[4740]: I0216 13:03:15.030194 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" event={"ID":"beeada69-65c5-434a-af02-8e6b23e13138","Type":"ContainerStarted","Data":"7d5abc7a16c492df2a5eaa32a803f4d79c5729f029f7a4c6618715209a325da3"}
Feb 16 13:03:15 crc kubenswrapper[4740]: I0216 13:03:15.045482 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" podStartSLOduration=2.512772826 podStartE2EDuration="6.045464191s" podCreationTimestamp="2026-02-16 13:03:09 +0000 UTC" firstStartedPulling="2026-02-16 13:03:10.425088502 +0000 UTC m=+617.801437223" lastFinishedPulling="2026-02-16 13:03:13.957779867 +0000 UTC m=+621.334128588" observedRunningTime="2026-02-16 13:03:15.043282139 +0000 UTC m=+622.419630860" watchObservedRunningTime="2026-02-16 13:03:15.045464191 +0000 UTC m=+622.421812912"
Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.615266 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-msmgh"]
Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620019 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-node" containerID="cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4" gracePeriod=30
Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620199 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-controller" containerID="cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992" gracePeriod=30
Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620400 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="sbdb" containerID="cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a" gracePeriod=30
Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620456 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="nbdb" containerID="cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356" gracePeriod=30
Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620524 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="northd" containerID="cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0" gracePeriod=30
Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620415 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-acl-logging" containerID="cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3" gracePeriod=30
Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620936 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf" gracePeriod=30
Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.663411 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" containerID="cri-o://0a6a80333534e58b90885aab282874ad3931ca7d2826046ac167da650d7e688d" gracePeriod=30
Feb 16 13:03:19 crc kubenswrapper[4740]: E0216 13:03:19.949609 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4734b9dd_f672_4895_86b3_538d9012af9f.slice/crio-85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4734b9dd_f672_4895_86b3_538d9012af9f.slice/crio-459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4734b9dd_f672_4895_86b3_538d9012af9f.slice/crio-845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.995790 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.060305 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/2.log"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.061120 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/1.log"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.061169 4740 generic.go:334] "Generic (PLEG): container finished" podID="21f981d4-46dd-4bb5-b244-aaf603008c5e" containerID="20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c" exitCode=2
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.061248 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerDied","Data":"20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c"}
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.061303 4740 scope.go:117] "RemoveContainer" containerID="f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.062021 4740 scope.go:117] "RemoveContainer" containerID="20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.062242 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-v88dn_openshift-multus(21f981d4-46dd-4bb5-b244-aaf603008c5e)\"" pod="openshift-multus/multus-v88dn" podUID="21f981d4-46dd-4bb5-b244-aaf603008c5e"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.068190 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/3.log"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.071368 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-acl-logging/0.log"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.071984 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-controller/0.log"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.072944 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="0a6a80333534e58b90885aab282874ad3931ca7d2826046ac167da650d7e688d" exitCode=0
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.072983 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a" exitCode=0
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.072998 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356" exitCode=0
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073010 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0" exitCode=0
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073021 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf" exitCode=0
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073034 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4" exitCode=0
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073044 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3" exitCode=143
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073054 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992" exitCode=143
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073049 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"0a6a80333534e58b90885aab282874ad3931ca7d2826046ac167da650d7e688d"}
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073116 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a"}
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073128 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356"}
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073137 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0"}
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf"}
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073160 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4"}
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073170 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3"}
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073180 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992"}
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.117952 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.314547 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-acl-logging/0.log"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.315122 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-controller/0.log"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.315536 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379386 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rglx7"]
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.379847 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379878 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.379897 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="nbdb"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379909 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="nbdb"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.379923 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379934 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.379946 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379957 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.379971 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379984 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380002 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="northd"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380025 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="northd"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380039 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-acl-logging"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380049 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-acl-logging"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380062 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="sbdb"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380072 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="sbdb"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380086 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-node"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380095 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-node"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380112 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380123 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380136 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kubecfg-setup"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380147 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kubecfg-setup"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380169 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380180 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380345 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380359 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380374 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-acl-logging"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380393 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-node"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380408 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380423 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="northd"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380433 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="nbdb"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380444 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="sbdb"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380459 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380471 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380481 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380596 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380607 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380731 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.384661 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7"
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430333 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430355 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430387 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430418 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rml5w\" (UniqueName: \"kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430461 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430477 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430502 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430535 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430568 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430606 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430645 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430687 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430711 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430731 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430751 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430804 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430882 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430905 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430952 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430977 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431000 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") "
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430459 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431068 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431013 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket" (OuterVolumeSpecName: "log-socket") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431020 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431040 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log" (OuterVolumeSpecName: "node-log") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431054 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431130 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431182 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431208 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash" (OuterVolumeSpecName: "host-slash") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431276 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431310 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431427 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431524 4740 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431542 4740 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431545 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431554 4740 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431567 4740 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431581 4740 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431596 4740 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431581 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431608 4740 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431621 4740 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431727 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431744 4740 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431887 4740 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431927 4740 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431944 4740 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431957 4740 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.432066 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-run-netns".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.436184 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w" (OuterVolumeSpecName: "kube-api-access-rml5w") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "kube-api-access-rml5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.436592 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.444318 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533483 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-netd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533574 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-kubelet\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s85w9\" (UniqueName: \"kubernetes.io/projected/821af362-e357-43e9-86e5-259cef9b4a63-kube-api-access-s85w9\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533646 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-ovn\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-node-log\") pod \"ovnkube-node-rglx7\" (UID: 
\"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533694 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533713 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-config\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533735 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-systemd-units\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533759 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-systemd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533779 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-openvswitch\") pod 
\"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533802 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-script-lib\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533847 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-log-socket\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533869 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533901 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-var-lib-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533940 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-env-overrides\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533966 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-netns\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533991 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/821af362-e357-43e9-86e5-259cef9b4a63-ovn-node-metrics-cert\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534091 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-slash\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534171 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-bin\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534257 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-etc-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534357 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534370 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rml5w\" (UniqueName: \"kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534382 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534391 4740 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534401 4740 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534411 4740 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534423 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635764 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-env-overrides\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635853 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-netns\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635878 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/821af362-e357-43e9-86e5-259cef9b4a63-ovn-node-metrics-cert\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-slash\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635925 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-bin\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635953 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-etc-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635984 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-netd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636007 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-kubelet\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636037 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s85w9\" (UniqueName: \"kubernetes.io/projected/821af362-e357-43e9-86e5-259cef9b4a63-kube-api-access-s85w9\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636058 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-ovn\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc 
kubenswrapper[4740]: I0216 13:03:20.636077 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-node-log\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636074 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-slash\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636104 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-netns\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636103 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636136 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636168 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-ovn\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636179 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-config\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636187 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-node-log\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636219 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-bin\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-etc-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636226 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-netd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-systemd-units\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636330 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-systemd-units\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636333 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-systemd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636357 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-systemd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636381 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-openvswitch\") pod \"ovnkube-node-rglx7\" 
(UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636422 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-script-lib\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-log-socket\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636455 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636471 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636495 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-log-socket\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636515 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-var-lib-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636533 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-var-lib-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.637347 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-env-overrides\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.637440 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-config\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.637504 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-script-lib\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.637548 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-kubelet\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.639170 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/821af362-e357-43e9-86e5-259cef9b4a63-ovn-node-metrics-cert\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.655142 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s85w9\" (UniqueName: \"kubernetes.io/projected/821af362-e357-43e9-86e5-259cef9b4a63-kube-api-access-s85w9\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.698707 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.083574 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/2.log" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.086493 4740 generic.go:334] "Generic (PLEG): container finished" podID="821af362-e357-43e9-86e5-259cef9b4a63" containerID="6640a5f734cd430cdfd59b39aab5a672ae67e177c3184774f33cb2d3d1771d6a" exitCode=0 Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.086581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerDied","Data":"6640a5f734cd430cdfd59b39aab5a672ae67e177c3184774f33cb2d3d1771d6a"} Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.086613 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"6dbf36e35751a26aa9ae9b347ef9788d0cd2eff5d34cf538a43699aafb28e9cd"} Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.092473 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-acl-logging/0.log" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.093082 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-controller/0.log" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.093471 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"eb66f3d2b37f21fe7aa111a136026ce7eb2cec2307821fe7b198f1e6beb272ce"} Feb 16 13:03:21 crc kubenswrapper[4740]: 
I0216 13:03:21.093510 4740 scope.go:117] "RemoveContainer" containerID="0a6a80333534e58b90885aab282874ad3931ca7d2826046ac167da650d7e688d" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.093627 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.122070 4740 scope.go:117] "RemoveContainer" containerID="85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.151000 4740 scope.go:117] "RemoveContainer" containerID="845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.152313 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-msmgh"] Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.155795 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-msmgh"] Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.170220 4740 scope.go:117] "RemoveContainer" containerID="459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.193929 4740 scope.go:117] "RemoveContainer" containerID="1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.219889 4740 scope.go:117] "RemoveContainer" containerID="9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.236925 4740 scope.go:117] "RemoveContainer" containerID="f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.255488 4740 scope.go:117] "RemoveContainer" containerID="db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.275085 4740 
scope.go:117] "RemoveContainer" containerID="aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.288906 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" path="/var/lib/kubelet/pods/4734b9dd-f672-4895-86b3-538d9012af9f/volumes" Feb 16 13:03:22 crc kubenswrapper[4740]: I0216 13:03:22.104566 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"3c822ef5edc784c2ed122b83091dcc838535d1f2ea973cfcebe6333e909032a7"} Feb 16 13:03:22 crc kubenswrapper[4740]: I0216 13:03:22.105247 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"0b2cf127fc76d5f211a82c80631be91aa8fed66cd8d416bce0603ccf11a098bb"} Feb 16 13:03:22 crc kubenswrapper[4740]: I0216 13:03:22.105268 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"1b599c8df447c38b19dee6f7763be37b308a6bea8f6131e02de6065a460c56d4"} Feb 16 13:03:22 crc kubenswrapper[4740]: I0216 13:03:22.105284 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"780ba962a9087e0d5489e81dd729f5b9f3268de2902452e5e1cbf4e5a1abf2be"} Feb 16 13:03:22 crc kubenswrapper[4740]: I0216 13:03:22.105300 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"4015b96cbdd41097647f4ac64dbcc061ea2e3211358b63803d279d32e84865f4"} Feb 16 13:03:22 crc 
kubenswrapper[4740]: I0216 13:03:22.105317 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"1c2a34fb4883ba1ef50823773f6fb33e2dddf0a0c5997c273d5ef93eeaf706e8"} Feb 16 13:03:25 crc kubenswrapper[4740]: I0216 13:03:25.125935 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"8eebd8d9e91db7b57983954a1dcd5eb9296a588096c359e650b022f62936a0bf"} Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.145951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"c6e7305ecf76c7668b1b225e0e04e911c6136fa17328d03b9aee282db663e924"} Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.146500 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.146544 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.146554 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.170881 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.182602 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" podStartSLOduration=7.182579109 podStartE2EDuration="7.182579109s" podCreationTimestamp="2026-02-16 13:03:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:03:27.179572699 +0000 UTC m=+634.555921420" watchObservedRunningTime="2026-02-16 13:03:27.182579109 +0000 UTC m=+634.558927830" Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.191091 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:31 crc kubenswrapper[4740]: I0216 13:03:31.281317 4740 scope.go:117] "RemoveContainer" containerID="20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c" Feb 16 13:03:31 crc kubenswrapper[4740]: E0216 13:03:31.282010 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-v88dn_openshift-multus(21f981d4-46dd-4bb5-b244-aaf603008c5e)\"" pod="openshift-multus/multus-v88dn" podUID="21f981d4-46dd-4bb5-b244-aaf603008c5e" Feb 16 13:03:45 crc kubenswrapper[4740]: I0216 13:03:45.281243 4740 scope.go:117] "RemoveContainer" containerID="20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c" Feb 16 13:03:46 crc kubenswrapper[4740]: I0216 13:03:46.267204 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/2.log" Feb 16 13:03:46 crc kubenswrapper[4740]: I0216 13:03:46.267838 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerStarted","Data":"97efa07b84aaee645f78c1f71ef09129a178f2d4e53e1afc73affcf68d389413"} Feb 16 13:03:50 crc kubenswrapper[4740]: I0216 13:03:50.724344 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.527632 4740 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp"] Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.530513 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.532731 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.536009 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp"] Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.609335 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-465qd\" (UniqueName: \"kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.609393 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.609447 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.710214 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-465qd\" (UniqueName: \"kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.710541 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.710746 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.710929 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.711459 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.742582 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-465qd\" (UniqueName: \"kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.850410 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:58 crc kubenswrapper[4740]: I0216 13:03:58.032702 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp"] Feb 16 13:03:58 crc kubenswrapper[4740]: W0216 13:03:58.044184 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e36b7f7_a888_4da4_a510_deafe9588b20.slice/crio-7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1 WatchSource:0}: Error finding container 7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1: Status 404 returned error can't find the container with id 7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1 Feb 16 13:03:58 crc kubenswrapper[4740]: I0216 13:03:58.446058 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerStarted","Data":"03ad00eae0a3a71c4297225458652dfc1ddde72c80091a97bf98cb7899dc9ee3"} Feb 16 13:03:58 crc kubenswrapper[4740]: I0216 13:03:58.446133 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerStarted","Data":"7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1"} Feb 16 13:03:59 crc kubenswrapper[4740]: I0216 13:03:59.451633 4740 generic.go:334] "Generic (PLEG): container finished" podID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerID="03ad00eae0a3a71c4297225458652dfc1ddde72c80091a97bf98cb7899dc9ee3" exitCode=0 Feb 16 13:03:59 crc kubenswrapper[4740]: I0216 13:03:59.451674 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerDied","Data":"03ad00eae0a3a71c4297225458652dfc1ddde72c80091a97bf98cb7899dc9ee3"} Feb 16 13:04:01 crc kubenswrapper[4740]: I0216 13:04:01.467903 4740 generic.go:334] "Generic (PLEG): container finished" podID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerID="922f3a2873a321b2c72d72f908bfd984a86e9d1eb495494228363388d2e29678" exitCode=0 Feb 16 13:04:01 crc kubenswrapper[4740]: I0216 13:04:01.468090 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerDied","Data":"922f3a2873a321b2c72d72f908bfd984a86e9d1eb495494228363388d2e29678"} Feb 16 13:04:02 crc kubenswrapper[4740]: I0216 13:04:02.477326 4740 generic.go:334] "Generic (PLEG): container finished" podID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerID="1f3a8e935017b1d1943e3f7c5358a0c1111f6c1f96fbbbbbac3d28e72fb51f3d" exitCode=0 Feb 16 13:04:02 crc kubenswrapper[4740]: I0216 13:04:02.477385 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerDied","Data":"1f3a8e935017b1d1943e3f7c5358a0c1111f6c1f96fbbbbbac3d28e72fb51f3d"} Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.720237 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.783847 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle\") pod \"4e36b7f7-a888-4da4-a510-deafe9588b20\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.783937 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util\") pod \"4e36b7f7-a888-4da4-a510-deafe9588b20\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.784604 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle" (OuterVolumeSpecName: "bundle") pod "4e36b7f7-a888-4da4-a510-deafe9588b20" (UID: "4e36b7f7-a888-4da4-a510-deafe9588b20"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.784990 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-465qd\" (UniqueName: \"kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd\") pod \"4e36b7f7-a888-4da4-a510-deafe9588b20\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.785326 4740 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.789428 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd" (OuterVolumeSpecName: "kube-api-access-465qd") pod "4e36b7f7-a888-4da4-a510-deafe9588b20" (UID: "4e36b7f7-a888-4da4-a510-deafe9588b20"). InnerVolumeSpecName "kube-api-access-465qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.885914 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-465qd\" (UniqueName: \"kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.945186 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util" (OuterVolumeSpecName: "util") pod "4e36b7f7-a888-4da4-a510-deafe9588b20" (UID: "4e36b7f7-a888-4da4-a510-deafe9588b20"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.987087 4740 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:04 crc kubenswrapper[4740]: I0216 13:04:04.493641 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerDied","Data":"7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1"} Feb 16 13:04:04 crc kubenswrapper[4740]: I0216 13:04:04.493685 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1" Feb 16 13:04:04 crc kubenswrapper[4740]: I0216 13:04:04.493701 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.069667 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-76m6k"] Feb 16 13:04:09 crc kubenswrapper[4740]: E0216 13:04:09.070303 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="pull" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.070321 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="pull" Feb 16 13:04:09 crc kubenswrapper[4740]: E0216 13:04:09.070334 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="util" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.070341 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="util" Feb 16 13:04:09 crc kubenswrapper[4740]: E0216 13:04:09.070357 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="extract" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.070364 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="extract" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.070477 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="extract" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.070953 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.074524 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.074887 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-56rgw" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.075509 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.087457 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-76m6k"] Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.254112 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55z87\" (UniqueName: \"kubernetes.io/projected/afdcb81a-db2a-4c04-b73b-30facf2d10af-kube-api-access-55z87\") pod \"nmstate-operator-694c9596b7-76m6k\" (UID: \"afdcb81a-db2a-4c04-b73b-30facf2d10af\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" Feb 
16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.355544 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55z87\" (UniqueName: \"kubernetes.io/projected/afdcb81a-db2a-4c04-b73b-30facf2d10af-kube-api-access-55z87\") pod \"nmstate-operator-694c9596b7-76m6k\" (UID: \"afdcb81a-db2a-4c04-b73b-30facf2d10af\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.374640 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55z87\" (UniqueName: \"kubernetes.io/projected/afdcb81a-db2a-4c04-b73b-30facf2d10af-kube-api-access-55z87\") pod \"nmstate-operator-694c9596b7-76m6k\" (UID: \"afdcb81a-db2a-4c04-b73b-30facf2d10af\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.395515 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.598131 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-76m6k"] Feb 16 13:04:10 crc kubenswrapper[4740]: I0216 13:04:10.528353 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" event={"ID":"afdcb81a-db2a-4c04-b73b-30facf2d10af","Type":"ContainerStarted","Data":"f059097902024f8242f4fd49cdb4b266f51749ca504b0ce23c35ea5cec4b476f"} Feb 16 13:04:12 crc kubenswrapper[4740]: I0216 13:04:12.539596 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" event={"ID":"afdcb81a-db2a-4c04-b73b-30facf2d10af","Type":"ContainerStarted","Data":"cea357629bf18f5604596868e8b4353bceecb56b373d61ebe68ec2c4a9831df4"} Feb 16 13:04:15 crc kubenswrapper[4740]: I0216 13:04:15.575143 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:04:15 crc kubenswrapper[4740]: I0216 13:04:15.575564 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.844264 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" podStartSLOduration=6.9764121 podStartE2EDuration="8.844248009s" podCreationTimestamp="2026-02-16 13:04:09 +0000 UTC" firstStartedPulling="2026-02-16 13:04:09.615758166 +0000 UTC m=+676.992106877" lastFinishedPulling="2026-02-16 13:04:11.483594065 +0000 UTC m=+678.859942786" observedRunningTime="2026-02-16 13:04:12.553557178 +0000 UTC m=+679.929905909" watchObservedRunningTime="2026-02-16 13:04:17.844248009 +0000 UTC m=+685.220596730" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.846189 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.846985 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.849060 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pfmc4" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.855950 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.857131 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.858754 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.866073 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.877561 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.886722 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-v88gn"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.887419 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.965141 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b7ffd056-af44-4007-8de6-cc707902d4c4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.965209 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqvsv\" (UniqueName: \"kubernetes.io/projected/58a2ae40-4e01-43af-907b-7e91246277ea-kube-api-access-gqvsv\") pod \"nmstate-metrics-58c85c668d-g5mkh\" (UID: \"58a2ae40-4e01-43af-907b-7e91246277ea\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.965313 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6pvr\" (UniqueName: \"kubernetes.io/projected/b7ffd056-af44-4007-8de6-cc707902d4c4-kube-api-access-v6pvr\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.982420 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.983116 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.986091 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.986273 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cnlh8" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.986409 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.998378 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc"] Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067089 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-dbus-socket\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067151 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b7ffd056-af44-4007-8de6-cc707902d4c4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067179 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-nmstate-lock\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc 
kubenswrapper[4740]: I0216 13:04:18.067238 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-ovs-socket\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067273 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqvsv\" (UniqueName: \"kubernetes.io/projected/58a2ae40-4e01-43af-907b-7e91246277ea-kube-api-access-gqvsv\") pod \"nmstate-metrics-58c85c668d-g5mkh\" (UID: \"58a2ae40-4e01-43af-907b-7e91246277ea\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6pvr\" (UniqueName: \"kubernetes.io/projected/b7ffd056-af44-4007-8de6-cc707902d4c4-kube-api-access-v6pvr\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067320 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5v6\" (UniqueName: \"kubernetes.io/projected/3c0ee084-492b-46da-82b3-9c9a8e1715fd-kube-api-access-zl5v6\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.083427 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6pvr\" (UniqueName: \"kubernetes.io/projected/b7ffd056-af44-4007-8de6-cc707902d4c4-kube-api-access-v6pvr\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.084185 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqvsv\" (UniqueName: \"kubernetes.io/projected/58a2ae40-4e01-43af-907b-7e91246277ea-kube-api-access-gqvsv\") pod \"nmstate-metrics-58c85c668d-g5mkh\" (UID: \"58a2ae40-4e01-43af-907b-7e91246277ea\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.086864 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b7ffd056-af44-4007-8de6-cc707902d4c4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.168780 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7bqh\" (UniqueName: \"kubernetes.io/projected/edcdba40-6318-4d29-a235-829e94bc8089-kube-api-access-f7bqh\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.168957 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edcdba40-6318-4d29-a235-829e94bc8089-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169003 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-dbus-socket\") pod 
\"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169136 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-nmstate-lock\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169176 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-ovs-socket\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169205 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edcdba40-6318-4d29-a235-829e94bc8089-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169240 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-nmstate-lock\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169300 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-dbus-socket\") pod \"nmstate-handler-v88gn\" (UID: 
\"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169304 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-ovs-socket\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169328 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5v6\" (UniqueName: \"kubernetes.io/projected/3c0ee084-492b-46da-82b3-9c9a8e1715fd-kube-api-access-zl5v6\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.176617 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.183280 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.184282 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5v6\" (UniqueName: \"kubernetes.io/projected/3c0ee084-492b-46da-82b3-9c9a8e1715fd-kube-api-access-zl5v6\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.200979 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65dcb9588c-cxtv2"] Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.201760 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.210590 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dcb9588c-cxtv2"] Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.214626 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.270634 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7bqh\" (UniqueName: \"kubernetes.io/projected/edcdba40-6318-4d29-a235-829e94bc8089-kube-api-access-f7bqh\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.270696 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edcdba40-6318-4d29-a235-829e94bc8089-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.270766 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edcdba40-6318-4d29-a235-829e94bc8089-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.272456 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edcdba40-6318-4d29-a235-829e94bc8089-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: 
\"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.282589 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edcdba40-6318-4d29-a235-829e94bc8089-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.291290 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7bqh\" (UniqueName: \"kubernetes.io/projected/edcdba40-6318-4d29-a235-829e94bc8089-kube-api-access-f7bqh\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.297036 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371666 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4c5k\" (UniqueName: \"kubernetes.io/projected/f22e5359-95ad-4163-8f93-88353190b805-kube-api-access-p4c5k\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371708 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-console-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371749 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-oauth-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371770 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-service-ca\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371789 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-oauth-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371831 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-trusted-ca-bundle\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371858 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.410770 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6"] Feb 16 13:04:18 crc kubenswrapper[4740]: W0216 13:04:18.418568 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7ffd056_af44_4007_8de6_cc707902d4c4.slice/crio-a1baadd7219fdeddcbe3d56986616ff267e4614958f73d06e9e2b22511e1535e WatchSource:0}: Error finding container a1baadd7219fdeddcbe3d56986616ff267e4614958f73d06e9e2b22511e1535e: Status 404 returned error can't find the container with id a1baadd7219fdeddcbe3d56986616ff267e4614958f73d06e9e2b22511e1535e Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.448679 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh"] Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 
13:04:18.473017 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-trusted-ca-bundle\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.473091 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.473151 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4c5k\" (UniqueName: \"kubernetes.io/projected/f22e5359-95ad-4163-8f93-88353190b805-kube-api-access-p4c5k\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.473172 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-console-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.473242 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-oauth-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.473442 
4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-service-ca\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.474732 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-oauth-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.474775 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-service-ca\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.474038 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-console-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.474666 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-trusted-ca-bundle\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.475539 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-oauth-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.478943 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-oauth-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.479578 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.498906 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4c5k\" (UniqueName: \"kubernetes.io/projected/f22e5359-95ad-4163-8f93-88353190b805-kube-api-access-p4c5k\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.507496 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc"] Feb 16 13:04:18 crc kubenswrapper[4740]: W0216 13:04:18.512176 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedcdba40_6318_4d29_a235_829e94bc8089.slice/crio-038a0fca574acf147a9a05e286e53f706e038fcc6253340b5a2d16d00735820a WatchSource:0}: Error finding container 
038a0fca574acf147a9a05e286e53f706e038fcc6253340b5a2d16d00735820a: Status 404 returned error can't find the container with id 038a0fca574acf147a9a05e286e53f706e038fcc6253340b5a2d16d00735820a Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.554268 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.571628 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v88gn" event={"ID":"3c0ee084-492b-46da-82b3-9c9a8e1715fd","Type":"ContainerStarted","Data":"4965bbe93b5792a12256e17dfd65b00618ca5c089bfc38a19dec37712ee87d4c"} Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.572693 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" event={"ID":"b7ffd056-af44-4007-8de6-cc707902d4c4","Type":"ContainerStarted","Data":"a1baadd7219fdeddcbe3d56986616ff267e4614958f73d06e9e2b22511e1535e"} Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.573418 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" event={"ID":"edcdba40-6318-4d29-a235-829e94bc8089","Type":"ContainerStarted","Data":"038a0fca574acf147a9a05e286e53f706e038fcc6253340b5a2d16d00735820a"} Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.574453 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" event={"ID":"58a2ae40-4e01-43af-907b-7e91246277ea","Type":"ContainerStarted","Data":"dd28d0ee016e225ca7e94bb019049e46e5c942ec5dc794f21927a50e28981eb9"} Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.715067 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dcb9588c-cxtv2"] Feb 16 13:04:19 crc kubenswrapper[4740]: I0216 13:04:19.581878 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-65dcb9588c-cxtv2" event={"ID":"f22e5359-95ad-4163-8f93-88353190b805","Type":"ContainerStarted","Data":"e0f7746addb599768b22590261559cf5667a05189a669384707bae343aabdc38"} Feb 16 13:04:19 crc kubenswrapper[4740]: I0216 13:04:19.582220 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dcb9588c-cxtv2" event={"ID":"f22e5359-95ad-4163-8f93-88353190b805","Type":"ContainerStarted","Data":"909dfc8245a0d1a1655cd87968208d3e76c879417d61d66e9cc323af1ee2b5e2"} Feb 16 13:04:19 crc kubenswrapper[4740]: I0216 13:04:19.608097 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65dcb9588c-cxtv2" podStartSLOduration=1.608073628 podStartE2EDuration="1.608073628s" podCreationTimestamp="2026-02-16 13:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:04:19.604174221 +0000 UTC m=+686.980522972" watchObservedRunningTime="2026-02-16 13:04:19.608073628 +0000 UTC m=+686.984422349" Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.594274 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v88gn" event={"ID":"3c0ee084-492b-46da-82b3-9c9a8e1715fd","Type":"ContainerStarted","Data":"3aee1cd43e3a6cc6060fd671a21f7b15a2286d84ebe4c5bc5055c0a1630177f5"} Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.594880 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.598081 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" event={"ID":"b7ffd056-af44-4007-8de6-cc707902d4c4","Type":"ContainerStarted","Data":"d17a8aea7361581a915e6fb6854d05a7cce92a74b1e26cff948fbc8ec764c904"} Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.598296 4740 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.600413 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" event={"ID":"edcdba40-6318-4d29-a235-829e94bc8089","Type":"ContainerStarted","Data":"7c4d9f98ef1c1089cbee1931cb50b4becee4db3519f6739aeb6a86707d92bc32"} Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.602686 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" event={"ID":"58a2ae40-4e01-43af-907b-7e91246277ea","Type":"ContainerStarted","Data":"96c4ca7426706ccd2b5c167f7da729785d3f75f23ef7537a92217bf24880f9e4"} Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.616081 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-v88gn" podStartSLOduration=1.711942467 podStartE2EDuration="4.616063195s" podCreationTimestamp="2026-02-16 13:04:17 +0000 UTC" firstStartedPulling="2026-02-16 13:04:18.297432332 +0000 UTC m=+685.673781053" lastFinishedPulling="2026-02-16 13:04:21.20155304 +0000 UTC m=+688.577901781" observedRunningTime="2026-02-16 13:04:21.612073415 +0000 UTC m=+688.988422136" watchObservedRunningTime="2026-02-16 13:04:21.616063195 +0000 UTC m=+688.992411916" Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.629539 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" podStartSLOduration=1.835698286 podStartE2EDuration="4.629522166s" podCreationTimestamp="2026-02-16 13:04:17 +0000 UTC" firstStartedPulling="2026-02-16 13:04:18.421292185 +0000 UTC m=+685.797640906" lastFinishedPulling="2026-02-16 13:04:21.215116035 +0000 UTC m=+688.591464786" observedRunningTime="2026-02-16 13:04:21.626336162 +0000 UTC m=+689.002684893" watchObservedRunningTime="2026-02-16 13:04:21.629522166 +0000 
UTC m=+689.005870887" Feb 16 13:04:23 crc kubenswrapper[4740]: I0216 13:04:23.306308 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" podStartSLOduration=3.626233031 podStartE2EDuration="6.30629002s" podCreationTimestamp="2026-02-16 13:04:17 +0000 UTC" firstStartedPulling="2026-02-16 13:04:18.514318296 +0000 UTC m=+685.890667017" lastFinishedPulling="2026-02-16 13:04:21.194375245 +0000 UTC m=+688.570724006" observedRunningTime="2026-02-16 13:04:21.643139243 +0000 UTC m=+689.019487964" watchObservedRunningTime="2026-02-16 13:04:23.30629002 +0000 UTC m=+690.682638741" Feb 16 13:04:23 crc kubenswrapper[4740]: I0216 13:04:23.616191 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" event={"ID":"58a2ae40-4e01-43af-907b-7e91246277ea","Type":"ContainerStarted","Data":"85f883a5f53bf9f7e69be0fc35a9513ead9b09563524871a9a8c28f816dd3dcb"} Feb 16 13:04:23 crc kubenswrapper[4740]: I0216 13:04:23.632509 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" podStartSLOduration=1.616540968 podStartE2EDuration="6.632485179s" podCreationTimestamp="2026-02-16 13:04:17 +0000 UTC" firstStartedPulling="2026-02-16 13:04:18.457282755 +0000 UTC m=+685.833631476" lastFinishedPulling="2026-02-16 13:04:23.473226966 +0000 UTC m=+690.849575687" observedRunningTime="2026-02-16 13:04:23.629269453 +0000 UTC m=+691.005618194" watchObservedRunningTime="2026-02-16 13:04:23.632485179 +0000 UTC m=+691.008833910" Feb 16 13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.237568 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.555193 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 
13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.555262 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.563009 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.645036 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.695444 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gctsd"] Feb 16 13:04:38 crc kubenswrapper[4740]: I0216 13:04:38.193798 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:45 crc kubenswrapper[4740]: I0216 13:04:45.575434 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:04:45 crc kubenswrapper[4740]: I0216 13:04:45.576559 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.113908 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk"] Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.118804 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.121319 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.127397 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk"] Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.230755 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.230870 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd7w9\" (UniqueName: \"kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.230906 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: 
I0216 13:04:51.333027 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.333632 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd7w9\" (UniqueName: \"kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.333862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.333705 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.334169 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.354872 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd7w9\" (UniqueName: \"kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.440763 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.896226 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk"] Feb 16 13:04:52 crc kubenswrapper[4740]: I0216 13:04:52.783301 4740 generic.go:334] "Generic (PLEG): container finished" podID="911ccf29-a1bf-402a-b445-df244f1acb70" containerID="dd0de84e56c9255236e11b685dd1fb2353e37f74f2d50abeddb8e9ac3617840b" exitCode=0 Feb 16 13:04:52 crc kubenswrapper[4740]: I0216 13:04:52.783368 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" event={"ID":"911ccf29-a1bf-402a-b445-df244f1acb70","Type":"ContainerDied","Data":"dd0de84e56c9255236e11b685dd1fb2353e37f74f2d50abeddb8e9ac3617840b"} Feb 16 13:04:52 crc kubenswrapper[4740]: I0216 13:04:52.783607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" event={"ID":"911ccf29-a1bf-402a-b445-df244f1acb70","Type":"ContainerStarted","Data":"38343100d5fece67f9046cc9eae2af977b7f2c047709b8aa17cc3753b6918116"} Feb 16 13:04:53 crc kubenswrapper[4740]: I0216 13:04:53.518382 4740 scope.go:117] "RemoveContainer" containerID="0f596f60cfc20df2fe6f9a50eaee6cbff74e73a1ac2673a361d75b584fd10bfd" Feb 16 13:04:53 crc kubenswrapper[4740]: I0216 13:04:53.761432 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-gctsd" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" containerID="cri-o://ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1" gracePeriod=15 Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.188862 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gctsd_adc3a749-7453-4afe-ba48-f34188be4832/console/0.log" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.189188 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.268705 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sslp\" (UniqueName: \"kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.268790 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.268898 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.268926 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269002 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269028 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269061 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269447 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269484 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca" (OuterVolumeSpecName: "service-ca") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269875 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.270130 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config" (OuterVolumeSpecName: "console-config") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.274916 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.275344 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp" (OuterVolumeSpecName: "kube-api-access-2sslp") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "kube-api-access-2sslp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.277434 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.371488 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.371864 4740 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.372050 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.372263 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sslp\" (UniqueName: \"kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.372389 4740 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.372591 4740 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.372861 4740 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc 
kubenswrapper[4740]: I0216 13:04:54.797033 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gctsd_adc3a749-7453-4afe-ba48-f34188be4832/console/0.log" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.797349 4740 generic.go:334] "Generic (PLEG): container finished" podID="adc3a749-7453-4afe-ba48-f34188be4832" containerID="ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1" exitCode=2 Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.797432 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gctsd" event={"ID":"adc3a749-7453-4afe-ba48-f34188be4832","Type":"ContainerDied","Data":"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1"} Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.797495 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gctsd" event={"ID":"adc3a749-7453-4afe-ba48-f34188be4832","Type":"ContainerDied","Data":"531ee6088e028abeb40db4014fff58f47925cdba0b3674ddf9755268d1aa83d4"} Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.797520 4740 scope.go:117] "RemoveContainer" containerID="ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.797432 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.804477 4740 generic.go:334] "Generic (PLEG): container finished" podID="911ccf29-a1bf-402a-b445-df244f1acb70" containerID="9b44d3ac65230daec50392ab807602372c0a2c24b01d13ecd8738da87cac0fc1" exitCode=0 Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.804516 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" event={"ID":"911ccf29-a1bf-402a-b445-df244f1acb70","Type":"ContainerDied","Data":"9b44d3ac65230daec50392ab807602372c0a2c24b01d13ecd8738da87cac0fc1"} Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.826764 4740 scope.go:117] "RemoveContainer" containerID="ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1" Feb 16 13:04:54 crc kubenswrapper[4740]: E0216 13:04:54.827308 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1\": container with ID starting with ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1 not found: ID does not exist" containerID="ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.827592 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1"} err="failed to get container status \"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1\": rpc error: code = NotFound desc = could not find container \"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1\": container with ID starting with ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1 not found: ID does not exist" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.839383 4740 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gctsd"] Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.842688 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-gctsd"] Feb 16 13:04:55 crc kubenswrapper[4740]: I0216 13:04:55.295726 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc3a749-7453-4afe-ba48-f34188be4832" path="/var/lib/kubelet/pods/adc3a749-7453-4afe-ba48-f34188be4832/volumes" Feb 16 13:04:55 crc kubenswrapper[4740]: I0216 13:04:55.814500 4740 generic.go:334] "Generic (PLEG): container finished" podID="911ccf29-a1bf-402a-b445-df244f1acb70" containerID="50606fcd420271c285464e348b53f8c669b10842110551df296340c5e230fea3" exitCode=0 Feb 16 13:04:55 crc kubenswrapper[4740]: I0216 13:04:55.814553 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" event={"ID":"911ccf29-a1bf-402a-b445-df244f1acb70","Type":"ContainerDied","Data":"50606fcd420271c285464e348b53f8c669b10842110551df296340c5e230fea3"} Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.060763 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.213179 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle\") pod \"911ccf29-a1bf-402a-b445-df244f1acb70\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.213257 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util\") pod \"911ccf29-a1bf-402a-b445-df244f1acb70\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.213318 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd7w9\" (UniqueName: \"kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9\") pod \"911ccf29-a1bf-402a-b445-df244f1acb70\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.215076 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle" (OuterVolumeSpecName: "bundle") pod "911ccf29-a1bf-402a-b445-df244f1acb70" (UID: "911ccf29-a1bf-402a-b445-df244f1acb70"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.222378 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9" (OuterVolumeSpecName: "kube-api-access-hd7w9") pod "911ccf29-a1bf-402a-b445-df244f1acb70" (UID: "911ccf29-a1bf-402a-b445-df244f1acb70"). InnerVolumeSpecName "kube-api-access-hd7w9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.233987 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util" (OuterVolumeSpecName: "util") pod "911ccf29-a1bf-402a-b445-df244f1acb70" (UID: "911ccf29-a1bf-402a-b445-df244f1acb70"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.315209 4740 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.315244 4740 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.315255 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd7w9\" (UniqueName: \"kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.829359 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" event={"ID":"911ccf29-a1bf-402a-b445-df244f1acb70","Type":"ContainerDied","Data":"38343100d5fece67f9046cc9eae2af977b7f2c047709b8aa17cc3753b6918116"} Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.829409 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38343100d5fece67f9046cc9eae2af977b7f2c047709b8aa17cc3753b6918116" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.829433 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.500026 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw"] Feb 16 13:05:06 crc kubenswrapper[4740]: E0216 13:05:06.500768 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="pull" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.500783 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="pull" Feb 16 13:05:06 crc kubenswrapper[4740]: E0216 13:05:06.500793 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="util" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.500802 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="util" Feb 16 13:05:06 crc kubenswrapper[4740]: E0216 13:05:06.500843 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.500854 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" Feb 16 13:05:06 crc kubenswrapper[4740]: E0216 13:05:06.500868 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="extract" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.500876 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="extract" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.501009 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" 
containerName="extract" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.501022 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.501488 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.504921 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-5j6l9" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.505082 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.505093 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.508968 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.509139 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.522026 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw"] Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.623752 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-apiservice-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 
13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.623826 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-webhook-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.623933 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x77m\" (UniqueName: \"kubernetes.io/projected/97f25eec-68aa-4b48-b40a-08ce0599d525-kube-api-access-6x77m\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.725329 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x77m\" (UniqueName: \"kubernetes.io/projected/97f25eec-68aa-4b48-b40a-08ce0599d525-kube-api-access-6x77m\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.725439 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-apiservice-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.725471 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-webhook-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.747019 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-apiservice-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.747019 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-webhook-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.751585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x77m\" (UniqueName: \"kubernetes.io/projected/97f25eec-68aa-4b48-b40a-08ce0599d525-kube-api-access-6x77m\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.824870 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.022096 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx"] Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.023230 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.025509 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.027871 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.028311 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jl8tw" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.043020 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx"] Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.130185 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-webhook-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.130252 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-apiservice-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" 
(UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.130287 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvhtv\" (UniqueName: \"kubernetes.io/projected/4163a038-60ca-4e8e-bf45-028b04101fc9-kube-api-access-kvhtv\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.231876 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-apiservice-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.231958 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-webhook-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.231999 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvhtv\" (UniqueName: \"kubernetes.io/projected/4163a038-60ca-4e8e-bf45-028b04101fc9-kube-api-access-kvhtv\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.252477 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-webhook-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.253458 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-apiservice-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.255965 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvhtv\" (UniqueName: \"kubernetes.io/projected/4163a038-60ca-4e8e-bf45-028b04101fc9-kube-api-access-kvhtv\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.341938 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw"] Feb 16 13:05:07 crc kubenswrapper[4740]: W0216 13:05:07.345519 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97f25eec_68aa_4b48_b40a_08ce0599d525.slice/crio-1038e4ab9b75e51525a55a167f5a057ee5a6e969bf7e51e96e5894a017063a40 WatchSource:0}: Error finding container 1038e4ab9b75e51525a55a167f5a057ee5a6e969bf7e51e96e5894a017063a40: Status 404 returned error can't find the container with id 1038e4ab9b75e51525a55a167f5a057ee5a6e969bf7e51e96e5894a017063a40 Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.355760 4740 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.580409 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx"] Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.893733 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" event={"ID":"4163a038-60ca-4e8e-bf45-028b04101fc9","Type":"ContainerStarted","Data":"efeffddba81eb0380a0f3f7ef629ee7bb3ee73d08afc3a747fb393664af4e60c"} Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.895219 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" event={"ID":"97f25eec-68aa-4b48-b40a-08ce0599d525","Type":"ContainerStarted","Data":"1038e4ab9b75e51525a55a167f5a057ee5a6e969bf7e51e96e5894a017063a40"} Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.927795 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" event={"ID":"97f25eec-68aa-4b48-b40a-08ce0599d525","Type":"ContainerStarted","Data":"3679805d8c4ed93ffb7101358dbe3148cc0ee68f61f7cb173f24187bebd08b49"} Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.928420 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.929164 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" event={"ID":"4163a038-60ca-4e8e-bf45-028b04101fc9","Type":"ContainerStarted","Data":"d3e825656c314ee604b066ac825937e92ee12c7c0ab540fa53c7dcabc5aac78f"} Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.929635 4740 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.943854 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" podStartSLOduration=1.653819235 podStartE2EDuration="6.943840192s" podCreationTimestamp="2026-02-16 13:05:06 +0000 UTC" firstStartedPulling="2026-02-16 13:05:07.348868532 +0000 UTC m=+734.725217263" lastFinishedPulling="2026-02-16 13:05:12.638889499 +0000 UTC m=+740.015238220" observedRunningTime="2026-02-16 13:05:12.943605804 +0000 UTC m=+740.319954525" watchObservedRunningTime="2026-02-16 13:05:12.943840192 +0000 UTC m=+740.320188913" Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.967211 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" podStartSLOduration=0.916660809 podStartE2EDuration="5.967196007s" podCreationTimestamp="2026-02-16 13:05:07 +0000 UTC" firstStartedPulling="2026-02-16 13:05:07.593359482 +0000 UTC m=+734.969708203" lastFinishedPulling="2026-02-16 13:05:12.64389468 +0000 UTC m=+740.020243401" observedRunningTime="2026-02-16 13:05:12.963933191 +0000 UTC m=+740.340281932" watchObservedRunningTime="2026-02-16 13:05:12.967196007 +0000 UTC m=+740.343544728" Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.575108 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.575497 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.575546 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.576447 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.576530 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c" gracePeriod=600 Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.947099 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c" exitCode=0 Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.947133 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c"} Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.947509 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" 
event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3"} Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.947536 4740 scope.go:117] "RemoveContainer" containerID="681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7" Feb 16 13:05:27 crc kubenswrapper[4740]: I0216 13:05:27.363894 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:32 crc kubenswrapper[4740]: I0216 13:05:32.201425 4740 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 13:05:46 crc kubenswrapper[4740]: I0216 13:05:46.828356 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.547381 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-frlcd"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.550046 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.551709 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.552289 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gnkqv" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.552329 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.558286 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.558957 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.561028 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.582450 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.635659 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-ffcm2"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.637570 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.640077 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.640599 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.640917 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.641090 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2dwkl" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659198 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-reloader\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659329 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e220608-2271-4260-bc94-e4d206c718d4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659386 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-startup\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659418 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659568 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnrdp\" (UniqueName: \"kubernetes.io/projected/28f2676a-f290-4e9d-9622-d8808c6b8192-kube-api-access-hnrdp\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659625 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-conf\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659709 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics-certs\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659789 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-sockets\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659853 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7z67\" (UniqueName: 
\"kubernetes.io/projected/2e220608-2271-4260-bc94-e4d206c718d4-kube-api-access-h7z67\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.660419 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-kfv4h"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.661746 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.664460 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.675634 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-kfv4h"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761552 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnrdp\" (UniqueName: \"kubernetes.io/projected/28f2676a-f290-4e9d-9622-d8808c6b8192-kube-api-access-hnrdp\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761608 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-conf\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761636 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-metrics-certs\") pod \"speaker-ffcm2\" (UID: 
\"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761666 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2xvq\" (UniqueName: \"kubernetes.io/projected/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-kube-api-access-r2xvq\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761701 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics-certs\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761800 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-cert\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761865 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761894 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-sockets\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 
crc kubenswrapper[4740]: I0216 13:05:47.761929 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7z67\" (UniqueName: \"kubernetes.io/projected/2e220608-2271-4260-bc94-e4d206c718d4-kube-api-access-h7z67\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761977 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762000 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-reloader\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762023 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/05937f4c-8149-4db8-bb5e-e863ae011d92-metallb-excludel2\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762057 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-conf\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762091 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gsbf\" (UniqueName: \"kubernetes.io/projected/05937f4c-8149-4db8-bb5e-e863ae011d92-kube-api-access-6gsbf\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e220608-2271-4260-bc94-e4d206c718d4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762196 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-sockets\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762213 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-startup\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762240 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-reloader\") pod 
\"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762493 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.763319 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-startup\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.768062 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics-certs\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.768615 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e220608-2271-4260-bc94-e4d206c718d4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.777761 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7z67\" (UniqueName: \"kubernetes.io/projected/2e220608-2271-4260-bc94-e4d206c718d4-kube-api-access-h7z67\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 
13:05:47.781890 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnrdp\" (UniqueName: \"kubernetes.io/projected/28f2676a-f290-4e9d-9622-d8808c6b8192-kube-api-access-hnrdp\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863134 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-cert\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863184 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863248 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/05937f4c-8149-4db8-bb5e-e863ae011d92-metallb-excludel2\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863282 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gsbf\" (UniqueName: 
\"kubernetes.io/projected/05937f4c-8149-4db8-bb5e-e863ae011d92-kube-api-access-6gsbf\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863344 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-metrics-certs\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863367 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2xvq\" (UniqueName: \"kubernetes.io/projected/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-kube-api-access-r2xvq\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: E0216 13:05:47.863401 4740 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 13:05:47 crc kubenswrapper[4740]: E0216 13:05:47.863475 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist podName:05937f4c-8149-4db8-bb5e-e863ae011d92 nodeName:}" failed. No retries permitted until 2026-02-16 13:05:48.363457273 +0000 UTC m=+775.739805994 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist") pod "speaker-ffcm2" (UID: "05937f4c-8149-4db8-bb5e-e863ae011d92") : secret "metallb-memberlist" not found Feb 16 13:05:47 crc kubenswrapper[4740]: E0216 13:05:47.863663 4740 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 16 13:05:47 crc kubenswrapper[4740]: E0216 13:05:47.863692 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs podName:e9790ca2-5f44-4c39-a31f-13dc607ab7c4 nodeName:}" failed. No retries permitted until 2026-02-16 13:05:48.36368445 +0000 UTC m=+775.740033171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs") pod "controller-69bbfbf88f-kfv4h" (UID: "e9790ca2-5f44-4c39-a31f-13dc607ab7c4") : secret "controller-certs-secret" not found Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.864256 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/05937f4c-8149-4db8-bb5e-e863ae011d92-metallb-excludel2\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.866046 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-cert\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.868531 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-metrics-certs\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.874159 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.880302 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gsbf\" (UniqueName: \"kubernetes.io/projected/05937f4c-8149-4db8-bb5e-e863ae011d92-kube-api-access-6gsbf\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.883225 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.891148 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2xvq\" (UniqueName: \"kubernetes.io/projected/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-kube-api-access-r2xvq\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.092270 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh"] Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.160027 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"e274bab74a1626c3b63a6777781609180502777996f79290ada0d89c8ee33d0a"} Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.172954 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" 
event={"ID":"2e220608-2271-4260-bc94-e4d206c718d4","Type":"ContainerStarted","Data":"2430562d3e2cc186a63cd1c2eedaff020e1522b1b01e6f2b9b040be74d4bf7ea"} Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.372926 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.372988 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:48 crc kubenswrapper[4740]: E0216 13:05:48.374001 4740 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 13:05:48 crc kubenswrapper[4740]: E0216 13:05:48.374080 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist podName:05937f4c-8149-4db8-bb5e-e863ae011d92 nodeName:}" failed. No retries permitted until 2026-02-16 13:05:49.37406064 +0000 UTC m=+776.750409361 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist") pod "speaker-ffcm2" (UID: "05937f4c-8149-4db8-bb5e-e863ae011d92") : secret "metallb-memberlist" not found Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.379968 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.583338 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.977101 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-kfv4h"] Feb 16 13:05:48 crc kubenswrapper[4740]: W0216 13:05:48.980651 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9790ca2_5f44_4c39_a31f_13dc607ab7c4.slice/crio-360a211771d715b9576c62ed79f638d32b1e2a23bc3a8a5fee065303e8f59f59 WatchSource:0}: Error finding container 360a211771d715b9576c62ed79f638d32b1e2a23bc3a8a5fee065303e8f59f59: Status 404 returned error can't find the container with id 360a211771d715b9576c62ed79f638d32b1e2a23bc3a8a5fee065303e8f59f59 Feb 16 13:05:49 crc kubenswrapper[4740]: I0216 13:05:49.180770 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-kfv4h" event={"ID":"e9790ca2-5f44-4c39-a31f-13dc607ab7c4","Type":"ContainerStarted","Data":"ea6b634412a715ccba5cfb746d81f8bec8e13d19011d076a5cd37ff0a5afb482"} Feb 16 13:05:49 crc kubenswrapper[4740]: I0216 13:05:49.180841 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-69bbfbf88f-kfv4h" event={"ID":"e9790ca2-5f44-4c39-a31f-13dc607ab7c4","Type":"ContainerStarted","Data":"360a211771d715b9576c62ed79f638d32b1e2a23bc3a8a5fee065303e8f59f59"} Feb 16 13:05:49 crc kubenswrapper[4740]: I0216 13:05:49.385970 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:49 crc kubenswrapper[4740]: I0216 13:05:49.392242 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:49 crc kubenswrapper[4740]: I0216 13:05:49.457018 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ffcm2" Feb 16 13:05:49 crc kubenswrapper[4740]: W0216 13:05:49.482594 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05937f4c_8149_4db8_bb5e_e863ae011d92.slice/crio-6c670af40654d0a585b7af1c8c7fa5e016913c57edd7e6907ada0543439c7ecf WatchSource:0}: Error finding container 6c670af40654d0a585b7af1c8c7fa5e016913c57edd7e6907ada0543439c7ecf: Status 404 returned error can't find the container with id 6c670af40654d0a585b7af1c8c7fa5e016913c57edd7e6907ada0543439c7ecf Feb 16 13:05:50 crc kubenswrapper[4740]: I0216 13:05:50.450628 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ffcm2" event={"ID":"05937f4c-8149-4db8-bb5e-e863ae011d92","Type":"ContainerStarted","Data":"57d3f5806bf8034d7bb2886f40e34b78cb29cea3543d9e2ae8c8eee8cf2fc742"} Feb 16 13:05:50 crc kubenswrapper[4740]: I0216 13:05:50.450681 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ffcm2" event={"ID":"05937f4c-8149-4db8-bb5e-e863ae011d92","Type":"ContainerStarted","Data":"6c670af40654d0a585b7af1c8c7fa5e016913c57edd7e6907ada0543439c7ecf"} Feb 16 13:05:50 crc kubenswrapper[4740]: I0216 13:05:50.456076 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-kfv4h" event={"ID":"e9790ca2-5f44-4c39-a31f-13dc607ab7c4","Type":"ContainerStarted","Data":"9a3105b3cef26fe7e89a716049a5f4b8d0ba5a1e1753252511fa2695f07b1eb3"} Feb 16 13:05:50 crc kubenswrapper[4740]: I0216 13:05:50.456231 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:50 crc kubenswrapper[4740]: I0216 13:05:50.479476 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-kfv4h" podStartSLOduration=3.479455648 podStartE2EDuration="3.479455648s" podCreationTimestamp="2026-02-16 
13:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:05:50.473563067 +0000 UTC m=+777.849911788" watchObservedRunningTime="2026-02-16 13:05:50.479455648 +0000 UTC m=+777.855804379" Feb 16 13:05:51 crc kubenswrapper[4740]: I0216 13:05:51.471588 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ffcm2" event={"ID":"05937f4c-8149-4db8-bb5e-e863ae011d92","Type":"ContainerStarted","Data":"2b8d5bfb2f4a6f5b9af31db37b4cd82053b2f220c4d5059d86768f201f1836ec"} Feb 16 13:05:51 crc kubenswrapper[4740]: I0216 13:05:51.471943 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ffcm2" Feb 16 13:05:51 crc kubenswrapper[4740]: I0216 13:05:51.493433 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-ffcm2" podStartSLOduration=4.49341574 podStartE2EDuration="4.49341574s" podCreationTimestamp="2026-02-16 13:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:05:51.492261633 +0000 UTC m=+778.868610354" watchObservedRunningTime="2026-02-16 13:05:51.49341574 +0000 UTC m=+778.869764451" Feb 16 13:05:56 crc kubenswrapper[4740]: I0216 13:05:56.499140 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" event={"ID":"2e220608-2271-4260-bc94-e4d206c718d4","Type":"ContainerStarted","Data":"f37d8f01be7f186d0cfd1ec356f9faa69031e4d1c36d5eda02dc2c3176c47bf1"} Feb 16 13:05:56 crc kubenswrapper[4740]: I0216 13:05:56.500319 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:56 crc kubenswrapper[4740]: I0216 13:05:56.502256 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="28f2676a-f290-4e9d-9622-d8808c6b8192" containerID="f6b7f2324310e20c93c7efce5dad368774d8c3e1361cdf1d57c1e9b44abf9204" exitCode=0 Feb 16 13:05:56 crc kubenswrapper[4740]: I0216 13:05:56.502293 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerDied","Data":"f6b7f2324310e20c93c7efce5dad368774d8c3e1361cdf1d57c1e9b44abf9204"} Feb 16 13:05:56 crc kubenswrapper[4740]: I0216 13:05:56.521596 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" podStartSLOduration=1.8519845080000001 podStartE2EDuration="9.521574785s" podCreationTimestamp="2026-02-16 13:05:47 +0000 UTC" firstStartedPulling="2026-02-16 13:05:48.09738157 +0000 UTC m=+775.473730291" lastFinishedPulling="2026-02-16 13:05:55.766971847 +0000 UTC m=+783.143320568" observedRunningTime="2026-02-16 13:05:56.515142186 +0000 UTC m=+783.891490907" watchObservedRunningTime="2026-02-16 13:05:56.521574785 +0000 UTC m=+783.897923506" Feb 16 13:05:57 crc kubenswrapper[4740]: I0216 13:05:57.511444 4740 generic.go:334] "Generic (PLEG): container finished" podID="28f2676a-f290-4e9d-9622-d8808c6b8192" containerID="7ab45c16a678070f6b9d23152f7761310eb42fda5b420d8113d3b14a1fe146f7" exitCode=0 Feb 16 13:05:57 crc kubenswrapper[4740]: I0216 13:05:57.511536 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerDied","Data":"7ab45c16a678070f6b9d23152f7761310eb42fda5b420d8113d3b14a1fe146f7"} Feb 16 13:05:58 crc kubenswrapper[4740]: I0216 13:05:58.518685 4740 generic.go:334] "Generic (PLEG): container finished" podID="28f2676a-f290-4e9d-9622-d8808c6b8192" containerID="7470b973696ac614c3457ef3f8faa678bf8c43243a0bcc76f86edc9a4bb7266e" exitCode=0 Feb 16 13:05:58 crc kubenswrapper[4740]: I0216 13:05:58.518768 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerDied","Data":"7470b973696ac614c3457ef3f8faa678bf8c43243a0bcc76f86edc9a4bb7266e"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.462327 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-ffcm2" Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527645 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"737bb3575eb39630038ffab597d1ac234e9d4edd2bbb3436450651d4121200b2"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527681 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"7f45de26383b0bc8823ec1b95e9d0597f5341d353628a7f3e8ef5c33698e9c30"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527691 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"cad00b7671692e43e7c071ff9c7f0f0c1d003fb5da10f1db712ca5c80d766660"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527699 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"b8d1dbfd6b3ab2bef22a57a710d68438ffd7197884828900c1ea356d3a83d4c4"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527706 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"6c1f2a78b7fe8008e61aca528fe3ef3de733507f7757166e186ad7c611d24cd6"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527714 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"8c83655ef2e72ce020be45b37b01e16ff04c8efcee2a2ad7426ab57ff39b093c"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527830 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.560398 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-frlcd" podStartSLOduration=4.852765442 podStartE2EDuration="12.560372523s" podCreationTimestamp="2026-02-16 13:05:47 +0000 UTC" firstStartedPulling="2026-02-16 13:05:48.03422845 +0000 UTC m=+775.410577171" lastFinishedPulling="2026-02-16 13:05:55.741835531 +0000 UTC m=+783.118184252" observedRunningTime="2026-02-16 13:05:59.548559829 +0000 UTC m=+786.924908560" watchObservedRunningTime="2026-02-16 13:05:59.560372523 +0000 UTC m=+786.936721284" Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.035784 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"] Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.037080 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vgwdx" Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.041322 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.041379 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.042681 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gbz6w" Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.053188 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"] Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.094086 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wdm\" (UniqueName: \"kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm\") pod \"openstack-operator-index-vgwdx\" (UID: \"1d07676e-d3a5-489e-bd5a-61d7a59a039b\") " pod="openstack-operators/openstack-operator-index-vgwdx" Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.196173 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wdm\" (UniqueName: \"kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm\") pod \"openstack-operator-index-vgwdx\" (UID: \"1d07676e-d3a5-489e-bd5a-61d7a59a039b\") " pod="openstack-operators/openstack-operator-index-vgwdx" Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.227121 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wdm\" (UniqueName: \"kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm\") pod \"openstack-operator-index-vgwdx\" (UID: 
\"1d07676e-d3a5-489e-bd5a-61d7a59a039b\") " pod="openstack-operators/openstack-operator-index-vgwdx" Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.367258 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vgwdx" Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.768264 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"] Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.875407 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-frlcd" Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.915411 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-frlcd" Feb 16 13:06:03 crc kubenswrapper[4740]: I0216 13:06:03.556299 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgwdx" event={"ID":"1d07676e-d3a5-489e-bd5a-61d7a59a039b","Type":"ContainerStarted","Data":"d61ae1b6020950c900af7ee3d49ade2d0b0882e3bedf8ab6d6113b5c5284ba1c"} Feb 16 13:06:05 crc kubenswrapper[4740]: I0216 13:06:05.419884 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"] Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.026205 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qzt4t"] Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.027419 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qzt4t" Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.035937 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qzt4t"] Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.047399 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxzlr\" (UniqueName: \"kubernetes.io/projected/7fe65e33-ae2e-4f40-b686-454192d6b538-kube-api-access-wxzlr\") pod \"openstack-operator-index-qzt4t\" (UID: \"7fe65e33-ae2e-4f40-b686-454192d6b538\") " pod="openstack-operators/openstack-operator-index-qzt4t" Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.149141 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxzlr\" (UniqueName: \"kubernetes.io/projected/7fe65e33-ae2e-4f40-b686-454192d6b538-kube-api-access-wxzlr\") pod \"openstack-operator-index-qzt4t\" (UID: \"7fe65e33-ae2e-4f40-b686-454192d6b538\") " pod="openstack-operators/openstack-operator-index-qzt4t" Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.170575 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxzlr\" (UniqueName: \"kubernetes.io/projected/7fe65e33-ae2e-4f40-b686-454192d6b538-kube-api-access-wxzlr\") pod \"openstack-operator-index-qzt4t\" (UID: \"7fe65e33-ae2e-4f40-b686-454192d6b538\") " pod="openstack-operators/openstack-operator-index-qzt4t" Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.347898 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qzt4t" Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.572518 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgwdx" event={"ID":"1d07676e-d3a5-489e-bd5a-61d7a59a039b","Type":"ContainerStarted","Data":"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46"} Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.572648 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vgwdx" podUID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" containerName="registry-server" containerID="cri-o://901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46" gracePeriod=2 Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.595658 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vgwdx" podStartSLOduration=1.8437921099999999 podStartE2EDuration="4.595638264s" podCreationTimestamp="2026-02-16 13:06:02 +0000 UTC" firstStartedPulling="2026-02-16 13:06:02.778063682 +0000 UTC m=+790.154412403" lastFinishedPulling="2026-02-16 13:06:05.529909826 +0000 UTC m=+792.906258557" observedRunningTime="2026-02-16 13:06:06.588838682 +0000 UTC m=+793.965187403" watchObservedRunningTime="2026-02-16 13:06:06.595638264 +0000 UTC m=+793.971986985" Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.766652 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qzt4t"] Feb 16 13:06:06 crc kubenswrapper[4740]: W0216 13:06:06.773262 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fe65e33_ae2e_4f40_b686_454192d6b538.slice/crio-7e65c4ea6e88c545a666e88e2cda9dca512d3162877ddadcf16c7e8d4025f6fa WatchSource:0}: Error finding container 
7e65c4ea6e88c545a666e88e2cda9dca512d3162877ddadcf16c7e8d4025f6fa: Status 404 returned error can't find the container with id 7e65c4ea6e88c545a666e88e2cda9dca512d3162877ddadcf16c7e8d4025f6fa Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.041003 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vgwdx" Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.072332 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2wdm\" (UniqueName: \"kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm\") pod \"1d07676e-d3a5-489e-bd5a-61d7a59a039b\" (UID: \"1d07676e-d3a5-489e-bd5a-61d7a59a039b\") " Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.077667 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm" (OuterVolumeSpecName: "kube-api-access-j2wdm") pod "1d07676e-d3a5-489e-bd5a-61d7a59a039b" (UID: "1d07676e-d3a5-489e-bd5a-61d7a59a039b"). InnerVolumeSpecName "kube-api-access-j2wdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.173556 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2wdm\" (UniqueName: \"kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.580764 4740 generic.go:334] "Generic (PLEG): container finished" podID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" containerID="901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46" exitCode=0 Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.580892 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vgwdx" Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.580896 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgwdx" event={"ID":"1d07676e-d3a5-489e-bd5a-61d7a59a039b","Type":"ContainerDied","Data":"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46"} Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.580971 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgwdx" event={"ID":"1d07676e-d3a5-489e-bd5a-61d7a59a039b","Type":"ContainerDied","Data":"d61ae1b6020950c900af7ee3d49ade2d0b0882e3bedf8ab6d6113b5c5284ba1c"} Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.580991 4740 scope.go:117] "RemoveContainer" containerID="901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46" Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.584798 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qzt4t" event={"ID":"7fe65e33-ae2e-4f40-b686-454192d6b538","Type":"ContainerStarted","Data":"ad823f48706259f0dff98372d32929b637cffd816f125563634c2ecc922fec61"} Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.584843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qzt4t" event={"ID":"7fe65e33-ae2e-4f40-b686-454192d6b538","Type":"ContainerStarted","Data":"7e65c4ea6e88c545a666e88e2cda9dca512d3162877ddadcf16c7e8d4025f6fa"} Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.600786 4740 scope.go:117] "RemoveContainer" containerID="901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46" Feb 16 13:06:07 crc kubenswrapper[4740]: E0216 13:06:07.601782 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46\": container with ID starting with 901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46 not found: ID does not exist" containerID="901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46" Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.601906 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46"} err="failed to get container status \"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46\": rpc error: code = NotFound desc = could not find container \"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46\": container with ID starting with 901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46 not found: ID does not exist" Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.603972 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qzt4t" podStartSLOduration=1.359154813 podStartE2EDuration="1.603954647s" podCreationTimestamp="2026-02-16 13:06:06 +0000 UTC" firstStartedPulling="2026-02-16 13:06:06.777271625 +0000 UTC m=+794.153620346" lastFinishedPulling="2026-02-16 13:06:07.022071459 +0000 UTC m=+794.398420180" observedRunningTime="2026-02-16 13:06:07.602155377 +0000 UTC m=+794.978504098" watchObservedRunningTime="2026-02-16 13:06:07.603954647 +0000 UTC m=+794.980303368" Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.626526 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"] Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.631347 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"] Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.886611 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:06:08 crc kubenswrapper[4740]: I0216 13:06:08.588480 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:06:09 crc kubenswrapper[4740]: I0216 13:06:09.289558 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" path="/var/lib/kubelet/pods/1d07676e-d3a5-489e-bd5a-61d7a59a039b/volumes" Feb 16 13:06:16 crc kubenswrapper[4740]: I0216 13:06:16.349175 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qzt4t" Feb 16 13:06:16 crc kubenswrapper[4740]: I0216 13:06:16.349887 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qzt4t" Feb 16 13:06:16 crc kubenswrapper[4740]: I0216 13:06:16.391449 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qzt4t" Feb 16 13:06:16 crc kubenswrapper[4740]: I0216 13:06:16.664189 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qzt4t" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.657288 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"] Feb 16 13:06:17 crc kubenswrapper[4740]: E0216 13:06:17.657847 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" containerName="registry-server" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.657864 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" containerName="registry-server" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.657999 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" containerName="registry-server" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.658978 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.661453 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-gw6jq" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.672976 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"] Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.808441 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.808506 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tztj\" (UniqueName: \"kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.808604 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" 
(UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.878426 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-frlcd" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.909180 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.909581 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.909660 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tztj\" (UniqueName: \"kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.909890 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" 
(UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.909668 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.935120 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tztj\" (UniqueName: \"kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.979489 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:18 crc kubenswrapper[4740]: I0216 13:06:18.373575 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"] Feb 16 13:06:18 crc kubenswrapper[4740]: W0216 13:06:18.377234 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7597307b_d3fd_4fa0_b370_a6d08b6a2daa.slice/crio-10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c WatchSource:0}: Error finding container 10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c: Status 404 returned error can't find the container with id 10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c Feb 16 13:06:18 crc kubenswrapper[4740]: I0216 13:06:18.653181 4740 generic.go:334] "Generic (PLEG): container finished" podID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerID="45e4bb3a93c683eb126bdaae78036721713d442e89c860779044cb36b24641cb" exitCode=0 Feb 16 13:06:18 crc kubenswrapper[4740]: I0216 13:06:18.653385 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" event={"ID":"7597307b-d3fd-4fa0-b370-a6d08b6a2daa","Type":"ContainerDied","Data":"45e4bb3a93c683eb126bdaae78036721713d442e89c860779044cb36b24641cb"} Feb 16 13:06:18 crc kubenswrapper[4740]: I0216 13:06:18.653468 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" event={"ID":"7597307b-d3fd-4fa0-b370-a6d08b6a2daa","Type":"ContainerStarted","Data":"10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c"} Feb 16 13:06:19 crc kubenswrapper[4740]: I0216 13:06:19.663285 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerID="9ee4c1a37103c1f1516301a53d50369a420119a42878e1e0f2c39067b42ff149" exitCode=0 Feb 16 13:06:19 crc kubenswrapper[4740]: I0216 13:06:19.663331 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" event={"ID":"7597307b-d3fd-4fa0-b370-a6d08b6a2daa","Type":"ContainerDied","Data":"9ee4c1a37103c1f1516301a53d50369a420119a42878e1e0f2c39067b42ff149"} Feb 16 13:06:20 crc kubenswrapper[4740]: I0216 13:06:20.672367 4740 generic.go:334] "Generic (PLEG): container finished" podID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerID="54f863a7f4725951e5ec2bf36ca009e95e79cf583a3f461e06dec8ce0e641e1d" exitCode=0 Feb 16 13:06:20 crc kubenswrapper[4740]: I0216 13:06:20.672580 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" event={"ID":"7597307b-d3fd-4fa0-b370-a6d08b6a2daa","Type":"ContainerDied","Data":"54f863a7f4725951e5ec2bf36ca009e95e79cf583a3f461e06dec8ce0e641e1d"} Feb 16 13:06:21 crc kubenswrapper[4740]: I0216 13:06:21.951118 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.065919 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle\") pod \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.066001 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tztj\" (UniqueName: \"kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj\") pod \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.066072 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util\") pod \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.066697 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle" (OuterVolumeSpecName: "bundle") pod "7597307b-d3fd-4fa0-b370-a6d08b6a2daa" (UID: "7597307b-d3fd-4fa0-b370-a6d08b6a2daa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.072119 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj" (OuterVolumeSpecName: "kube-api-access-9tztj") pod "7597307b-d3fd-4fa0-b370-a6d08b6a2daa" (UID: "7597307b-d3fd-4fa0-b370-a6d08b6a2daa"). InnerVolumeSpecName "kube-api-access-9tztj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.079442 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util" (OuterVolumeSpecName: "util") pod "7597307b-d3fd-4fa0-b370-a6d08b6a2daa" (UID: "7597307b-d3fd-4fa0-b370-a6d08b6a2daa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.167211 4740 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.167241 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tztj\" (UniqueName: \"kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.167251 4740 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.698398 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" event={"ID":"7597307b-d3fd-4fa0-b370-a6d08b6a2daa","Type":"ContainerDied","Data":"10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c"} Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.698771 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c" Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.698456 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.602708 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"] Feb 16 13:06:24 crc kubenswrapper[4740]: E0216 13:06:24.603244 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="util" Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.603256 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="util" Feb 16 13:06:24 crc kubenswrapper[4740]: E0216 13:06:24.603264 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="extract" Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.603271 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="extract" Feb 16 13:06:24 crc kubenswrapper[4740]: E0216 13:06:24.603281 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="pull" Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.603287 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="pull" Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.603398 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="extract" Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.603784 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.606381 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-5h76t" Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.622915 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"] Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.699601 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bldhj\" (UniqueName: \"kubernetes.io/projected/4c82699a-266c-43ce-acce-32c8aea26c10-kube-api-access-bldhj\") pod \"openstack-operator-controller-init-7f746469c7-kzds7\" (UID: \"4c82699a-266c-43ce-acce-32c8aea26c10\") " pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.801037 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bldhj\" (UniqueName: \"kubernetes.io/projected/4c82699a-266c-43ce-acce-32c8aea26c10-kube-api-access-bldhj\") pod \"openstack-operator-controller-init-7f746469c7-kzds7\" (UID: \"4c82699a-266c-43ce-acce-32c8aea26c10\") " pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.820749 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bldhj\" (UniqueName: \"kubernetes.io/projected/4c82699a-266c-43ce-acce-32c8aea26c10-kube-api-access-bldhj\") pod \"openstack-operator-controller-init-7f746469c7-kzds7\" (UID: \"4c82699a-266c-43ce-acce-32c8aea26c10\") " pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.923136 4740 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" Feb 16 13:06:25 crc kubenswrapper[4740]: I0216 13:06:25.349870 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"] Feb 16 13:06:25 crc kubenswrapper[4740]: I0216 13:06:25.718547 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" event={"ID":"4c82699a-266c-43ce-acce-32c8aea26c10","Type":"ContainerStarted","Data":"2d53d6a466f21336466b4ea9dc38f352cc28f4453557bd8464569927a273c38d"} Feb 16 13:06:29 crc kubenswrapper[4740]: I0216 13:06:29.742434 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" event={"ID":"4c82699a-266c-43ce-acce-32c8aea26c10","Type":"ContainerStarted","Data":"170c72ca7f2a2582ad4ff2b7e6a60aa43bc691572b294dcb16c51e81b61ce668"} Feb 16 13:06:29 crc kubenswrapper[4740]: I0216 13:06:29.743031 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" Feb 16 13:06:29 crc kubenswrapper[4740]: I0216 13:06:29.773123 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" podStartSLOduration=2.148378333 podStartE2EDuration="5.773100018s" podCreationTimestamp="2026-02-16 13:06:24 +0000 UTC" firstStartedPulling="2026-02-16 13:06:25.369498626 +0000 UTC m=+812.745847347" lastFinishedPulling="2026-02-16 13:06:28.994220271 +0000 UTC m=+816.370569032" observedRunningTime="2026-02-16 13:06:29.767381532 +0000 UTC m=+817.143730253" watchObservedRunningTime="2026-02-16 13:06:29.773100018 +0000 UTC m=+817.149448739" Feb 16 13:06:34 crc kubenswrapper[4740]: I0216 13:06:34.925797 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.193456 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.195379 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.199655 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.200698 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zvdwm" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.200850 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.202481 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-f86fp" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.219665 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.233931 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.260112 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.262422 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.264885 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lkmts" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.276880 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.284041 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.284845 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.286805 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-npqnf" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.303727 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.320764 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.321790 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.324773 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-ff9b6" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.332725 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87w7z\" (UniqueName: \"kubernetes.io/projected/d6090007-0c13-4ea2-823c-3d95bb336fd8-kube-api-access-87w7z\") pod \"barbican-operator-controller-manager-868647ff47-jsfjx\" (UID: \"d6090007-0c13-4ea2-823c-3d95bb336fd8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.332850 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5vw\" (UniqueName: \"kubernetes.io/projected/f0032304-8799-4a85-964f-2017bfd2dbc8-kube-api-access-md5vw\") pod \"cinder-operator-controller-manager-5d946d989d-rpbmb\" (UID: \"f0032304-8799-4a85-964f-2017bfd2dbc8\") " 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.343898 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.345112 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.353290 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-x9s8f" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.359491 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.375165 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.384845 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.385840 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.390508 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nczlp" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.398709 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.403277 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.403403 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.406328 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-z7z7h" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.410079 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.429928 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.430796 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.434475 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plcc8\" (UniqueName: \"kubernetes.io/projected/7f22cc6e-3761-4336-ab1d-74d9fd88432c-kube-api-access-plcc8\") pod \"heat-operator-controller-manager-69f49c598c-kk4mh\" (UID: \"7f22cc6e-3761-4336-ab1d-74d9fd88432c\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.434527 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87w7z\" (UniqueName: \"kubernetes.io/projected/d6090007-0c13-4ea2-823c-3d95bb336fd8-kube-api-access-87w7z\") pod \"barbican-operator-controller-manager-868647ff47-jsfjx\" (UID: \"d6090007-0c13-4ea2-823c-3d95bb336fd8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.434591 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54fn5\" (UniqueName: \"kubernetes.io/projected/90321508-9bb9-458e-ada0-001c779161c1-kube-api-access-54fn5\") pod \"glance-operator-controller-manager-77987464f4-9xbzr\" (UID: \"90321508-9bb9-458e-ada0-001c779161c1\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.434652 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkgr6\" (UniqueName: \"kubernetes.io/projected/069bdc0e-d9e1-4e93-a6fc-8aa439550dd0-kube-api-access-gkgr6\") pod \"designate-operator-controller-manager-6d8bf5c495-9kqqk\" (UID: \"069bdc0e-d9e1-4e93-a6fc-8aa439550dd0\") " 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.434673 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5vw\" (UniqueName: \"kubernetes.io/projected/f0032304-8799-4a85-964f-2017bfd2dbc8-kube-api-access-md5vw\") pod \"cinder-operator-controller-manager-5d946d989d-rpbmb\" (UID: \"f0032304-8799-4a85-964f-2017bfd2dbc8\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.435297 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-dvkmf" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.448033 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.449166 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.456680 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.461463 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-fr6sk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.461941 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.471303 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.474109 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md5vw\" (UniqueName: \"kubernetes.io/projected/f0032304-8799-4a85-964f-2017bfd2dbc8-kube-api-access-md5vw\") pod \"cinder-operator-controller-manager-5d946d989d-rpbmb\" (UID: \"f0032304-8799-4a85-964f-2017bfd2dbc8\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.478401 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87w7z\" (UniqueName: \"kubernetes.io/projected/d6090007-0c13-4ea2-823c-3d95bb336fd8-kube-api-access-87w7z\") pod \"barbican-operator-controller-manager-868647ff47-jsfjx\" (UID: \"d6090007-0c13-4ea2-823c-3d95bb336fd8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.495594 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.496664 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.499091 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-2k29d" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.515511 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.520164 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.521039 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.521997 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.530690 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7rqlb" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.533106 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.534340 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.535979 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536044 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krb6v\" (UniqueName: \"kubernetes.io/projected/3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17-kube-api-access-krb6v\") pod \"ironic-operator-controller-manager-554564d7fc-v28lz\" (UID: \"3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536082 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkgr6\" (UniqueName: \"kubernetes.io/projected/069bdc0e-d9e1-4e93-a6fc-8aa439550dd0-kube-api-access-gkgr6\") pod \"designate-operator-controller-manager-6d8bf5c495-9kqqk\" (UID: \"069bdc0e-d9e1-4e93-a6fc-8aa439550dd0\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536118 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnpqb\" (UniqueName: \"kubernetes.io/projected/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-kube-api-access-nnpqb\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc 
kubenswrapper[4740]: I0216 13:06:54.536151 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlpn6\" (UniqueName: \"kubernetes.io/projected/fdf72675-c282-4f45-ad93-19aa643dcff8-kube-api-access-wlpn6\") pod \"horizon-operator-controller-manager-5b9b8895d5-nl26x\" (UID: \"fdf72675-c282-4f45-ad93-19aa643dcff8\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536188 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plcc8\" (UniqueName: \"kubernetes.io/projected/7f22cc6e-3761-4336-ab1d-74d9fd88432c-kube-api-access-plcc8\") pod \"heat-operator-controller-manager-69f49c598c-kk4mh\" (UID: \"7f22cc6e-3761-4336-ab1d-74d9fd88432c\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536229 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk6rh\" (UniqueName: \"kubernetes.io/projected/7f932811-4449-440a-b4c7-4817bfb33dd3-kube-api-access-rk6rh\") pod \"manila-operator-controller-manager-54f6768c69-44wdn\" (UID: \"7f932811-4449-440a-b4c7-4817bfb33dd3\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536281 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54fn5\" (UniqueName: \"kubernetes.io/projected/90321508-9bb9-458e-ada0-001c779161c1-kube-api-access-54fn5\") pod \"glance-operator-controller-manager-77987464f4-9xbzr\" (UID: \"90321508-9bb9-458e-ada0-001c779161c1\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.537038 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.538342 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.544401 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-vbtbk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.559310 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.566974 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54fn5\" (UniqueName: \"kubernetes.io/projected/90321508-9bb9-458e-ada0-001c779161c1-kube-api-access-54fn5\") pod \"glance-operator-controller-manager-77987464f4-9xbzr\" (UID: \"90321508-9bb9-458e-ada0-001c779161c1\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.568798 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plcc8\" (UniqueName: \"kubernetes.io/projected/7f22cc6e-3761-4336-ab1d-74d9fd88432c-kube-api-access-plcc8\") pod \"heat-operator-controller-manager-69f49c598c-kk4mh\" (UID: \"7f22cc6e-3761-4336-ab1d-74d9fd88432c\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.569010 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkgr6\" (UniqueName: \"kubernetes.io/projected/069bdc0e-d9e1-4e93-a6fc-8aa439550dd0-kube-api-access-gkgr6\") pod \"designate-operator-controller-manager-6d8bf5c495-9kqqk\" (UID: \"069bdc0e-d9e1-4e93-a6fc-8aa439550dd0\") " 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.585278 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.591793 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.592997 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.595277 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-nk8vd" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.604340 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.634198 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.635749 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.637060 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.637658 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638333 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638381 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlj89\" (UniqueName: \"kubernetes.io/projected/121ee83b-e7f1-4302-9455-4cc6f53a07a5-kube-api-access-qlj89\") pod \"neutron-operator-controller-manager-64ddbf8bb-7t92r\" (UID: \"121ee83b-e7f1-4302-9455-4cc6f53a07a5\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638418 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krb6v\" (UniqueName: \"kubernetes.io/projected/3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17-kube-api-access-krb6v\") pod \"ironic-operator-controller-manager-554564d7fc-v28lz\" (UID: \"3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638453 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmx59\" (UniqueName: \"kubernetes.io/projected/a49c1d67-8cf7-4429-ac73-da13d129304d-kube-api-access-pmx59\") pod 
\"mariadb-operator-controller-manager-6994f66f48-7gw4t\" (UID: \"a49c1d67-8cf7-4429-ac73-da13d129304d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnpqb\" (UniqueName: \"kubernetes.io/projected/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-kube-api-access-nnpqb\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638513 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlpn6\" (UniqueName: \"kubernetes.io/projected/fdf72675-c282-4f45-ad93-19aa643dcff8-kube-api-access-wlpn6\") pod \"horizon-operator-controller-manager-5b9b8895d5-nl26x\" (UID: \"fdf72675-c282-4f45-ad93-19aa643dcff8\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638544 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjjht\" (UniqueName: \"kubernetes.io/projected/fce48c02-3aa2-404b-a9a4-7ba789835be0-kube-api-access-xjjht\") pod \"keystone-operator-controller-manager-b4d948c87-z2m7j\" (UID: \"fce48c02-3aa2-404b-a9a4-7ba789835be0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638575 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlkp\" (UniqueName: \"kubernetes.io/projected/ba6767b2-e03c-4c12-880d-90bd809d9b48-kube-api-access-rrlkp\") pod \"nova-operator-controller-manager-567668f5cf-fn4g2\" (UID: \"ba6767b2-e03c-4c12-880d-90bd809d9b48\") " 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638621 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk6rh\" (UniqueName: \"kubernetes.io/projected/7f932811-4449-440a-b4c7-4817bfb33dd3-kube-api-access-rk6rh\") pod \"manila-operator-controller-manager-54f6768c69-44wdn\" (UID: \"7f932811-4449-440a-b4c7-4817bfb33dd3\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:06:54 crc kubenswrapper[4740]: E0216 13:06:54.638893 4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:54 crc kubenswrapper[4740]: E0216 13:06:54.638944 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:55.138923702 +0000 UTC m=+842.515272423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.639740 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9xj25" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.651531 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.653366 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.655204 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-l8m2m" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.660217 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.671991 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnpqb\" (UniqueName: \"kubernetes.io/projected/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-kube-api-access-nnpqb\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.673986 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krb6v\" (UniqueName: \"kubernetes.io/projected/3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17-kube-api-access-krb6v\") pod \"ironic-operator-controller-manager-554564d7fc-v28lz\" (UID: \"3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.680837 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlpn6\" (UniqueName: \"kubernetes.io/projected/fdf72675-c282-4f45-ad93-19aa643dcff8-kube-api-access-wlpn6\") pod \"horizon-operator-controller-manager-5b9b8895d5-nl26x\" (UID: \"fdf72675-c282-4f45-ad93-19aa643dcff8\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.684910 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-rk6rh\" (UniqueName: \"kubernetes.io/projected/7f932811-4449-440a-b4c7-4817bfb33dd3-kube-api-access-rk6rh\") pod \"manila-operator-controller-manager-54f6768c69-44wdn\" (UID: \"7f932811-4449-440a-b4c7-4817bfb33dd3\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.687431 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.703625 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.740646 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjjht\" (UniqueName: \"kubernetes.io/projected/fce48c02-3aa2-404b-a9a4-7ba789835be0-kube-api-access-xjjht\") pod \"keystone-operator-controller-manager-b4d948c87-z2m7j\" (UID: \"fce48c02-3aa2-404b-a9a4-7ba789835be0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.740703 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrlkp\" (UniqueName: \"kubernetes.io/projected/ba6767b2-e03c-4c12-880d-90bd809d9b48-kube-api-access-rrlkp\") pod \"nova-operator-controller-manager-567668f5cf-fn4g2\" (UID: \"ba6767b2-e03c-4c12-880d-90bd809d9b48\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.740773 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqnw8\" (UniqueName: \"kubernetes.io/projected/00e4da3c-6d3d-459a-86c2-01a4cdb81e51-kube-api-access-vqnw8\") pod 
\"octavia-operator-controller-manager-69f8888797-pbpdw\" (UID: \"00e4da3c-6d3d-459a-86c2-01a4cdb81e51\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.741192 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.741262 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlj89\" (UniqueName: \"kubernetes.io/projected/121ee83b-e7f1-4302-9455-4cc6f53a07a5-kube-api-access-qlj89\") pod \"neutron-operator-controller-manager-64ddbf8bb-7t92r\" (UID: \"121ee83b-e7f1-4302-9455-4cc6f53a07a5\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.741626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmx59\" (UniqueName: \"kubernetes.io/projected/a49c1d67-8cf7-4429-ac73-da13d129304d-kube-api-access-pmx59\") pod \"mariadb-operator-controller-manager-6994f66f48-7gw4t\" (UID: \"a49c1d67-8cf7-4429-ac73-da13d129304d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.741676 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rg6p\" (UniqueName: \"kubernetes.io/projected/76134787-0eff-47bd-982e-16c2c4f98f19-kube-api-access-5rg6p\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.760465 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.769705 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmx59\" (UniqueName: \"kubernetes.io/projected/a49c1d67-8cf7-4429-ac73-da13d129304d-kube-api-access-pmx59\") pod \"mariadb-operator-controller-manager-6994f66f48-7gw4t\" (UID: \"a49c1d67-8cf7-4429-ac73-da13d129304d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.771841 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlj89\" (UniqueName: \"kubernetes.io/projected/121ee83b-e7f1-4302-9455-4cc6f53a07a5-kube-api-access-qlj89\") pod \"neutron-operator-controller-manager-64ddbf8bb-7t92r\" (UID: \"121ee83b-e7f1-4302-9455-4cc6f53a07a5\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.778209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjjht\" (UniqueName: \"kubernetes.io/projected/fce48c02-3aa2-404b-a9a4-7ba789835be0-kube-api-access-xjjht\") pod \"keystone-operator-controller-manager-b4d948c87-z2m7j\" (UID: \"fce48c02-3aa2-404b-a9a4-7ba789835be0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.779068 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrlkp\" (UniqueName: \"kubernetes.io/projected/ba6767b2-e03c-4c12-880d-90bd809d9b48-kube-api-access-rrlkp\") pod \"nova-operator-controller-manager-567668f5cf-fn4g2\" (UID: 
\"ba6767b2-e03c-4c12-880d-90bd809d9b48\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.796603 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.808415 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.823529 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-f9994" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.825774 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.845231 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rg6p\" (UniqueName: \"kubernetes.io/projected/76134787-0eff-47bd-982e-16c2c4f98f19-kube-api-access-5rg6p\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.845278 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ks2f\" (UniqueName: \"kubernetes.io/projected/6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4-kube-api-access-6ks2f\") pod \"ovn-operator-controller-manager-d44cf6b75-gclp4\" (UID: \"6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.845328 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqnw8\" (UniqueName: \"kubernetes.io/projected/00e4da3c-6d3d-459a-86c2-01a4cdb81e51-kube-api-access-vqnw8\") pod \"octavia-operator-controller-manager-69f8888797-pbpdw\" (UID: \"00e4da3c-6d3d-459a-86c2-01a4cdb81e51\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.845359 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: E0216 13:06:54.845503 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:54 crc kubenswrapper[4740]: E0216 13:06:54.845545 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:55.345532875 +0000 UTC m=+842.721881586 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.858688 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.870598 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqnw8\" (UniqueName: \"kubernetes.io/projected/00e4da3c-6d3d-459a-86c2-01a4cdb81e51-kube-api-access-vqnw8\") pod \"octavia-operator-controller-manager-69f8888797-pbpdw\" (UID: \"00e4da3c-6d3d-459a-86c2-01a4cdb81e51\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.875637 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rg6p\" (UniqueName: \"kubernetes.io/projected/76134787-0eff-47bd-982e-16c2c4f98f19-kube-api-access-5rg6p\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.879659 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6865b"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.882638 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.897938 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8drdp" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.898935 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.913766 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.932966 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.935303 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6865b"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.947571 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.948660 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.951830 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlnd6\" (UniqueName: \"kubernetes.io/projected/c6400043-1325-4af3-8c79-4b383441668c-kube-api-access-zlnd6\") pod \"placement-operator-controller-manager-8497b45c89-64xmt\" (UID: \"c6400043-1325-4af3-8c79-4b383441668c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.951940 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ks2f\" (UniqueName: \"kubernetes.io/projected/6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4-kube-api-access-6ks2f\") pod \"ovn-operator-controller-manager-d44cf6b75-gclp4\" (UID: \"6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.955349 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-5fkd4" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.959062 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.977161 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.985445 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ks2f\" (UniqueName: \"kubernetes.io/projected/6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4-kube-api-access-6ks2f\") pod \"ovn-operator-controller-manager-d44cf6b75-gclp4\" (UID: \"6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.009077 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.020707 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.062541 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6c9q\" (UniqueName: \"kubernetes.io/projected/04f86073-3515-4d62-a02a-c63d06ecdaaa-kube-api-access-z6c9q\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cnxhk\" (UID: \"04f86073-3515-4d62-a02a-c63d06ecdaaa\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.062630 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlnd6\" (UniqueName: \"kubernetes.io/projected/c6400043-1325-4af3-8c79-4b383441668c-kube-api-access-zlnd6\") pod \"placement-operator-controller-manager-8497b45c89-64xmt\" (UID: \"c6400043-1325-4af3-8c79-4b383441668c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.062732 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7z9\" (UniqueName: \"kubernetes.io/projected/519c5b9e-ed4f-4cba-a731-70a22209f642-kube-api-access-wf7z9\") pod \"swift-operator-controller-manager-68f46476f-6865b\" (UID: \"519c5b9e-ed4f-4cba-a731-70a22209f642\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.070221 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-58cw4"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.071540 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.073027 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.074662 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fs5g5" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.081878 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-58cw4"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.083133 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlnd6\" (UniqueName: \"kubernetes.io/projected/c6400043-1325-4af3-8c79-4b383441668c-kube-api-access-zlnd6\") pod \"placement-operator-controller-manager-8497b45c89-64xmt\" (UID: \"c6400043-1325-4af3-8c79-4b383441668c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.091144 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.092900 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.096487 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-db5db" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.098051 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.117735 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.118663 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.120928 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.121001 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.123299 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7wqpg" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.134957 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.155928 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.157225 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.164937 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7z9\" (UniqueName: \"kubernetes.io/projected/519c5b9e-ed4f-4cba-a731-70a22209f642-kube-api-access-wf7z9\") pod \"swift-operator-controller-manager-68f46476f-6865b\" (UID: \"519c5b9e-ed4f-4cba-a731-70a22209f642\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.165022 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6q9\" (UniqueName: \"kubernetes.io/projected/7666c640-a9f4-4e09-b79c-7fd31116bd79-kube-api-access-dt6q9\") pod \"test-operator-controller-manager-7866795846-58cw4\" (UID: \"7666c640-a9f4-4e09-b79c-7fd31116bd79\") " pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.165069 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6c9q\" (UniqueName: \"kubernetes.io/projected/04f86073-3515-4d62-a02a-c63d06ecdaaa-kube-api-access-z6c9q\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cnxhk\" (UID: \"04f86073-3515-4d62-a02a-c63d06ecdaaa\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.165122 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.165282 4740 secret.go:188] 
Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.165356 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:56.165340916 +0000 UTC m=+843.541689637 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.169351 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-q9wn8" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.184166 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.184530 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.197912 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6c9q\" (UniqueName: \"kubernetes.io/projected/04f86073-3515-4d62-a02a-c63d06ecdaaa-kube-api-access-z6c9q\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cnxhk\" (UID: \"04f86073-3515-4d62-a02a-c63d06ecdaaa\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.201409 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf7z9\" (UniqueName: \"kubernetes.io/projected/519c5b9e-ed4f-4cba-a731-70a22209f642-kube-api-access-wf7z9\") pod \"swift-operator-controller-manager-68f46476f-6865b\" (UID: \"519c5b9e-ed4f-4cba-a731-70a22209f642\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.231637 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.255207 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.266621 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dh6q\" (UniqueName: \"kubernetes.io/projected/e749615e-a716-4e6e-8830-947b128e4e58-kube-api-access-9dh6q\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.266678 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.266720 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt6q9\" (UniqueName: \"kubernetes.io/projected/7666c640-a9f4-4e09-b79c-7fd31116bd79-kube-api-access-dt6q9\") pod \"test-operator-controller-manager-7866795846-58cw4\" (UID: \"7666c640-a9f4-4e09-b79c-7fd31116bd79\") " pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.266876 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc 
kubenswrapper[4740]: I0216 13:06:55.266955 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m64zf\" (UniqueName: \"kubernetes.io/projected/3e6434b1-64ba-481f-b001-8a465254dc0a-kube-api-access-m64zf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qttct\" (UID: \"3e6434b1-64ba-481f-b001-8a465254dc0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.267022 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqdp5\" (UniqueName: \"kubernetes.io/projected/001719d5-3a51-4f6b-b316-9e98f53ed575-kube-api-access-dqdp5\") pod \"watcher-operator-controller-manager-5db88f68c-pbkbj\" (UID: \"001719d5-3a51-4f6b-b316-9e98f53ed575\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.284324 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt6q9\" (UniqueName: \"kubernetes.io/projected/7666c640-a9f4-4e09-b79c-7fd31116bd79-kube-api-access-dt6q9\") pod \"test-operator-controller-manager-7866795846-58cw4\" (UID: \"7666c640-a9f4-4e09-b79c-7fd31116bd79\") " pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.301372 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.303882 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.303996 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.371861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dh6q\" (UniqueName: \"kubernetes.io/projected/e749615e-a716-4e6e-8830-947b128e4e58-kube-api-access-9dh6q\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.371913 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.371956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.371987 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-m64zf\" (UniqueName: \"kubernetes.io/projected/3e6434b1-64ba-481f-b001-8a465254dc0a-kube-api-access-m64zf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qttct\" (UID: \"3e6434b1-64ba-481f-b001-8a465254dc0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.372022 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqdp5\" (UniqueName: \"kubernetes.io/projected/001719d5-3a51-4f6b-b316-9e98f53ed575-kube-api-access-dqdp5\") pod \"watcher-operator-controller-manager-5db88f68c-pbkbj\" (UID: \"001719d5-3a51-4f6b-b316-9e98f53ed575\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.372046 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.372874 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.372931 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:56.372915891 +0000 UTC m=+843.749264602 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.373111 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.373133 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:55.873126618 +0000 UTC m=+843.249475339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.373537 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.373560 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:55.873553541 +0000 UTC m=+843.249902262 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.404403 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dh6q\" (UniqueName: \"kubernetes.io/projected/e749615e-a716-4e6e-8830-947b128e4e58-kube-api-access-9dh6q\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.404454 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m64zf\" (UniqueName: \"kubernetes.io/projected/3e6434b1-64ba-481f-b001-8a465254dc0a-kube-api-access-m64zf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qttct\" (UID: \"3e6434b1-64ba-481f-b001-8a465254dc0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.405154 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqdp5\" (UniqueName: \"kubernetes.io/projected/001719d5-3a51-4f6b-b316-9e98f53ed575-kube-api-access-dqdp5\") pod \"watcher-operator-controller-manager-5db88f68c-pbkbj\" (UID: \"001719d5-3a51-4f6b-b316-9e98f53ed575\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.489033 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.511640 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.517902 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.611651 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.638737 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.656395 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.666590 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.871224 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.874463 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.878957 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod 
\"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.879024 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.879160 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.879209 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:56.879193171 +0000 UTC m=+844.255541892 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.879571 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.879606 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. 
No retries permitted until 2026-02-16 13:06:56.879597274 +0000 UTC m=+844.255945995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.893907 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.907716 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.914386 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.914969 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" event={"ID":"069bdc0e-d9e1-4e93-a6fc-8aa439550dd0","Type":"ContainerStarted","Data":"7d0f3608ca03eacaef9a545c7cf65f78e64cdce34bd3efd92621873484da969a"} Feb 16 13:06:55 crc kubenswrapper[4740]: W0216 13:06:55.915059 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d65efdf_ffc7_44cd_9dd1_1b4d9be2e2a4.slice/crio-3a774460b6d91808f5f33e259f0e0ff615f9441aa4f241daac90cbe8b956a6b3 WatchSource:0}: Error finding container 3a774460b6d91808f5f33e259f0e0ff615f9441aa4f241daac90cbe8b956a6b3: Status 404 returned error can't find the container with id 3a774460b6d91808f5f33e259f0e0ff615f9441aa4f241daac90cbe8b956a6b3 Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.917584 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" event={"ID":"3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17","Type":"ContainerStarted","Data":"d15fad7071d80c28ca043afc7c38fcf3eda19d434122892002dc81454df9733d"} Feb 16 13:06:55 crc kubenswrapper[4740]: W0216 13:06:55.921009 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00e4da3c_6d3d_459a_86c2_01a4cdb81e51.slice/crio-a42d5a1f407d2d60b373504c1fe21c0ee0fc8cd9cddd09a374eadfbeea651fb6 WatchSource:0}: Error finding container a42d5a1f407d2d60b373504c1fe21c0ee0fc8cd9cddd09a374eadfbeea651fb6: Status 404 returned error can't find the container with id a42d5a1f407d2d60b373504c1fe21c0ee0fc8cd9cddd09a374eadfbeea651fb6 Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.927201 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" event={"ID":"90321508-9bb9-458e-ada0-001c779161c1","Type":"ContainerStarted","Data":"47e337add0de33eadbd3b4f34dfa4abf1892de3b8d0cfea22c19f062207445f6"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.930573 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" event={"ID":"d6090007-0c13-4ea2-823c-3d95bb336fd8","Type":"ContainerStarted","Data":"b669f692e024f5d866ad7c1df52c3ebc51d809e755eeaad6389a6fa77398e917"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.934294 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" event={"ID":"7f22cc6e-3761-4336-ab1d-74d9fd88432c","Type":"ContainerStarted","Data":"9b89598cdcbd10918871fe779bfb1498d7ac79fb9d16f4d8005e0a1bdc9ef53f"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.935531 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" event={"ID":"f0032304-8799-4a85-964f-2017bfd2dbc8","Type":"ContainerStarted","Data":"4da242de7dd6711d83ce18efc38b20f8fb33a586b270cbf42e0fabc398651ac9"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.936380 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" event={"ID":"fdf72675-c282-4f45-ad93-19aa643dcff8","Type":"ContainerStarted","Data":"2a40e526f6321380411e8c5383dfb3a989e186a2647040ccdd2f30f690eb4ac6"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.937285 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" event={"ID":"7f932811-4449-440a-b4c7-4817bfb33dd3","Type":"ContainerStarted","Data":"55e53ced0c147ac31a7db6db379fc2d1e2f7bcfb961db6ed3d6c10307eb87d2a"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.938610 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" event={"ID":"fce48c02-3aa2-404b-a9a4-7ba789835be0","Type":"ContainerStarted","Data":"b9be96baaeafcbde450bac1e9f2e73fe8ce255ecf5fa6de29af393feae35af53"} Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.068924 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk"] Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.076910 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2"] Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.083577 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r"] Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.087928 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba6767b2_e03c_4c12_880d_90bd809d9b48.slice/crio-af317feb0657db68683948e679ce49efca40c57a063a4a234d08c5a39b998197 WatchSource:0}: Error finding container af317feb0657db68683948e679ce49efca40c57a063a4a234d08c5a39b998197: Status 404 returned error can't find the container with id af317feb0657db68683948e679ce49efca40c57a063a4a234d08c5a39b998197 Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.089086 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod121ee83b_e7f1_4302_9455_4cc6f53a07a5.slice/crio-011c468b651f237ba5b2a19d837b732b2903e4f88d5de8a1ced5c096cb6e078f WatchSource:0}: Error finding container 011c468b651f237ba5b2a19d837b732b2903e4f88d5de8a1ced5c096cb6e078f: Status 404 returned error can't find the container with id 011c468b651f237ba5b2a19d837b732b2903e4f88d5de8a1ced5c096cb6e078f Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.090078 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04f86073_3515_4d62_a02a_c63d06ecdaaa.slice/crio-599e5d5092c1435efa05145eb9aa2b60d1dc27665ac9f4992bd7c7e4bfc841db WatchSource:0}: Error finding container 599e5d5092c1435efa05145eb9aa2b60d1dc27665ac9f4992bd7c7e4bfc841db: Status 404 returned error can't find the container with id 599e5d5092c1435efa05145eb9aa2b60d1dc27665ac9f4992bd7c7e4bfc841db Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.093751 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qlj89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-7t92r_openstack-operators(121ee83b-e7f1-4302-9455-4cc6f53a07a5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.095141 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" podUID="121ee83b-e7f1-4302-9455-4cc6f53a07a5" Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.177743 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-58cw4"] Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.185093 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.185238 4740 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.185289 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:58.185270985 +0000 UTC m=+845.561619706 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.185912 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6865b"] Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.189944 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt"] Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.190769 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod519c5b9e_ed4f_4cba_a731_70a22209f642.slice/crio-4bd7259dfa7349a747eca1148b334f44e5d17510455f9588bd735510a7e35065 WatchSource:0}: Error finding container 4bd7259dfa7349a747eca1148b334f44e5d17510455f9588bd735510a7e35065: Status 404 returned error can't find the container with id 4bd7259dfa7349a747eca1148b334f44e5d17510455f9588bd735510a7e35065 Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.193993 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wf7z9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-6865b_openstack-operators(519c5b9e-ed4f-4cba-a731-70a22209f642): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.195197 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" podUID="519c5b9e-ed4f-4cba-a731-70a22209f642" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.212694 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zlnd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-64xmt_openstack-operators(c6400043-1325-4af3-8c79-4b383441668c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.212764 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dt6q9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-58cw4_openstack-operators(7666c640-a9f4-4e09-b79c-7fd31116bd79): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.214456 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" podUID="7666c640-a9f4-4e09-b79c-7fd31116bd79" Feb 16 13:06:56 crc 
kubenswrapper[4740]: E0216 13:06:56.214605 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" podUID="c6400043-1325-4af3-8c79-4b383441668c" Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.317408 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct"] Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.321269 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj"] Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.327913 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e6434b1_64ba_481f_b001_8a465254dc0a.slice/crio-bc9fd7fcca035933ef6698d62b80243dba583386f3d44ca975cdf045c8c34e41 WatchSource:0}: Error finding container bc9fd7fcca035933ef6698d62b80243dba583386f3d44ca975cdf045c8c34e41: Status 404 returned error can't find the container with id bc9fd7fcca035933ef6698d62b80243dba583386f3d44ca975cdf045c8c34e41 Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.335385 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod001719d5_3a51_4f6b_b316_9e98f53ed575.slice/crio-9bf8214a64c4d3745cac983cd9f790cd6e27ea7ca79929e0ccd79012cc482244 WatchSource:0}: Error finding container 9bf8214a64c4d3745cac983cd9f790cd6e27ea7ca79929e0ccd79012cc482244: Status 404 returned error can't find the container with id 9bf8214a64c4d3745cac983cd9f790cd6e27ea7ca79929e0ccd79012cc482244 Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.339860 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqdp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-pbkbj_openstack-operators(001719d5-3a51-4f6b-b316-9e98f53ed575): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.341230 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" podUID="001719d5-3a51-4f6b-b316-9e98f53ed575" Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.389306 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.389455 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found 
Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.390952 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:58.390931338 +0000 UTC m=+845.767280059 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.896556 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.896611 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.896736 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.896778 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 
13:06:56.896804 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:58.896784734 +0000 UTC m=+846.273133455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.896880 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:58.896870777 +0000 UTC m=+846.273219498 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.953453 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" event={"ID":"519c5b9e-ed4f-4cba-a731-70a22209f642","Type":"ContainerStarted","Data":"4bd7259dfa7349a747eca1148b334f44e5d17510455f9588bd735510a7e35065"} Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.958947 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" podUID="519c5b9e-ed4f-4cba-a731-70a22209f642" Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.960159 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" event={"ID":"3e6434b1-64ba-481f-b001-8a465254dc0a","Type":"ContainerStarted","Data":"bc9fd7fcca035933ef6698d62b80243dba583386f3d44ca975cdf045c8c34e41"} Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.991587 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" event={"ID":"ba6767b2-e03c-4c12-880d-90bd809d9b48","Type":"ContainerStarted","Data":"af317feb0657db68683948e679ce49efca40c57a063a4a234d08c5a39b998197"} Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.999179 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" event={"ID":"a49c1d67-8cf7-4429-ac73-da13d129304d","Type":"ContainerStarted","Data":"0385dab7ad9131cea67b9c8c27e8b5b113798f5cf90514a26bd1963ffd3614b3"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.006744 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" event={"ID":"7666c640-a9f4-4e09-b79c-7fd31116bd79","Type":"ContainerStarted","Data":"a65407aff5a17c047ca3fc566d9eb9da30e8b17d0332823937c418a04667a0a0"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.008458 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" event={"ID":"121ee83b-e7f1-4302-9455-4cc6f53a07a5","Type":"ContainerStarted","Data":"011c468b651f237ba5b2a19d837b732b2903e4f88d5de8a1ced5c096cb6e078f"} Feb 16 13:06:57 crc kubenswrapper[4740]: E0216 13:06:57.010751 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" podUID="7666c640-a9f4-4e09-b79c-7fd31116bd79" Feb 16 13:06:57 crc kubenswrapper[4740]: E0216 13:06:57.017158 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" podUID="121ee83b-e7f1-4302-9455-4cc6f53a07a5" Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.024414 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" event={"ID":"00e4da3c-6d3d-459a-86c2-01a4cdb81e51","Type":"ContainerStarted","Data":"a42d5a1f407d2d60b373504c1fe21c0ee0fc8cd9cddd09a374eadfbeea651fb6"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.039421 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" event={"ID":"04f86073-3515-4d62-a02a-c63d06ecdaaa","Type":"ContainerStarted","Data":"599e5d5092c1435efa05145eb9aa2b60d1dc27665ac9f4992bd7c7e4bfc841db"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.043615 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" event={"ID":"001719d5-3a51-4f6b-b316-9e98f53ed575","Type":"ContainerStarted","Data":"9bf8214a64c4d3745cac983cd9f790cd6e27ea7ca79929e0ccd79012cc482244"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.050306 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" event={"ID":"c6400043-1325-4af3-8c79-4b383441668c","Type":"ContainerStarted","Data":"5a8d5f49d18add468072f26ba9b7bf5bff6f197fcc497101efb9222b6c62cd23"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.055089 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" event={"ID":"6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4","Type":"ContainerStarted","Data":"3a774460b6d91808f5f33e259f0e0ff615f9441aa4f241daac90cbe8b956a6b3"} Feb 16 13:06:57 crc kubenswrapper[4740]: E0216 13:06:57.059276 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" podUID="c6400043-1325-4af3-8c79-4b383441668c" Feb 16 13:06:57 crc kubenswrapper[4740]: E0216 13:06:57.059366 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" podUID="001719d5-3a51-4f6b-b316-9e98f53ed575" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.064961 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" podUID="519c5b9e-ed4f-4cba-a731-70a22209f642" 
Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.065516 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" podUID="c6400043-1325-4af3-8c79-4b383441668c" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.065577 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" podUID="7666c640-a9f4-4e09-b79c-7fd31116bd79" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.065629 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" podUID="121ee83b-e7f1-4302-9455-4cc6f53a07a5" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.065704 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" podUID="001719d5-3a51-4f6b-b316-9e98f53ed575" Feb 16 13:06:58 crc kubenswrapper[4740]: I0216 13:06:58.218541 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.218669 4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.218825 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:02.218789709 +0000 UTC m=+849.595138430 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: I0216 13:06:58.423045 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.424696 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.424745 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:02.42473177 +0000 UTC m=+849.801080491 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: I0216 13:06:58.930014 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:58 crc kubenswrapper[4740]: I0216 13:06:58.930096 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.930256 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.930317 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. 
No retries permitted until 2026-02-16 13:07:02.930297448 +0000 UTC m=+850.306646179 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.930742 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.930782 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:02.930772513 +0000 UTC m=+850.307121234 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:07:02 crc kubenswrapper[4740]: I0216 13:07:02.289153 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:02 crc kubenswrapper[4740]: E0216 13:07:02.289465 4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:07:02 crc kubenswrapper[4740]: E0216 13:07:02.290668 4740 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:10.290645583 +0000 UTC m=+857.666994304 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:07:02 crc kubenswrapper[4740]: I0216 13:07:02.493955 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:02 crc kubenswrapper[4740]: E0216 13:07:02.494175 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:07:02 crc kubenswrapper[4740]: E0216 13:07:02.494257 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:10.494238118 +0000 UTC m=+857.870586829 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:07:03 crc kubenswrapper[4740]: I0216 13:07:03.001105 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:03 crc kubenswrapper[4740]: I0216 13:07:03.001183 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:03 crc kubenswrapper[4740]: E0216 13:07:03.001317 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:07:03 crc kubenswrapper[4740]: E0216 13:07:03.001400 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:07:03 crc kubenswrapper[4740]: E0216 13:07:03.001436 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:11.001410627 +0000 UTC m=+858.377759358 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:07:03 crc kubenswrapper[4740]: E0216 13:07:03.001481 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:11.001460859 +0000 UTC m=+858.377809580 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.033291 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.034058 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m64zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-qttct_openstack-operators(3e6434b1-64ba-481f-b001-8a465254dc0a): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.035628 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" podUID="3e6434b1-64ba-481f-b001-8a465254dc0a" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.143303 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" podUID="3e6434b1-64ba-481f-b001-8a465254dc0a" Feb 16 13:07:10 crc kubenswrapper[4740]: I0216 13:07:10.319971 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.320101 4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.320151 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:26.320137708 +0000 UTC m=+873.696486429 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:07:10 crc kubenswrapper[4740]: I0216 13:07:10.523252 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.523461 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.523550 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:26.523526927 +0000 UTC m=+873.899875708 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.563126 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.563386 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjjht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-z2m7j_openstack-operators(fce48c02-3aa2-404b-a9a4-7ba789835be0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.565388 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" podUID="fce48c02-3aa2-404b-a9a4-7ba789835be0" Feb 16 13:07:11 crc kubenswrapper[4740]: I0216 13:07:11.030250 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod 
\"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:11 crc kubenswrapper[4740]: I0216 13:07:11.030447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.030497 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.030592 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.030603 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:27.030578922 +0000 UTC m=+874.406927653 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.030669 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. 
No retries permitted until 2026-02-16 13:07:27.030650905 +0000 UTC m=+874.406999706 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.132665 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.132870 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-fn4g2_openstack-operators(ba6767b2-e03c-4c12-880d-90bd809d9b48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.134419 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" podUID="ba6767b2-e03c-4c12-880d-90bd809d9b48" Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.162085 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" podUID="ba6767b2-e03c-4c12-880d-90bd809d9b48" Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.162298 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" podUID="fce48c02-3aa2-404b-a9a4-7ba789835be0" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.156454 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" event={"ID":"04f86073-3515-4d62-a02a-c63d06ecdaaa","Type":"ContainerStarted","Data":"84e0c2ad86b4b346bf7c5e722ea5b1553d4a9ceffbd1769f6e2ab3f0bbebf2be"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.157526 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.163586 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" event={"ID":"fdf72675-c282-4f45-ad93-19aa643dcff8","Type":"ContainerStarted","Data":"ed84f9bc75348d4de5e76b4a1258f7231376e554f543995fc9f51e997bea6b8a"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.164203 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.173094 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" event={"ID":"3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17","Type":"ContainerStarted","Data":"5e12839da7ef9d73f0abaf53e6765c0811604425ebc887f5fd89416794867fad"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.173735 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.176958 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" event={"ID":"90321508-9bb9-458e-ada0-001c779161c1","Type":"ContainerStarted","Data":"0663476f63d36bde41abe9508a187caac021094503bed92dddba2140dfa81573"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.177350 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.190033 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" event={"ID":"7f22cc6e-3761-4336-ab1d-74d9fd88432c","Type":"ContainerStarted","Data":"97e6ced6e91be15d4adf0e418a7c0e5edd3962baae38b8b1fbd006434c51def0"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.190178 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.195161 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" event={"ID":"00e4da3c-6d3d-459a-86c2-01a4cdb81e51","Type":"ContainerStarted","Data":"d1913dc70e1cc4caba3d347773ae29e7a522c4d9cff123497501ebf4d5b011d3"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.195361 4740 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.205640 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" event={"ID":"a49c1d67-8cf7-4429-ac73-da13d129304d","Type":"ContainerStarted","Data":"d9bdf62562c2e6bd794cd16d311402868a6ad520153614fecea588155a8ebd23"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.206453 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.217933 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" podStartSLOduration=3.786177252 podStartE2EDuration="18.217910801s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.092980136 +0000 UTC m=+843.469328867" lastFinishedPulling="2026-02-16 13:07:10.524713695 +0000 UTC m=+857.901062416" observedRunningTime="2026-02-16 13:07:12.20371048 +0000 UTC m=+859.580059201" watchObservedRunningTime="2026-02-16 13:07:12.217910801 +0000 UTC m=+859.594259522" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.218951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" event={"ID":"7f932811-4449-440a-b4c7-4817bfb33dd3","Type":"ContainerStarted","Data":"86c23ad09326d1bfc0140d4535854a9f7dba8a9e2b8579db0832aaa56b1d9748"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.219060 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.228385 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" event={"ID":"d6090007-0c13-4ea2-823c-3d95bb336fd8","Type":"ContainerStarted","Data":"51237ef57631201d6327bcfabb8e5bd4b44ee141f5b4fade5936342435c24da9"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.228967 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.231232 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" event={"ID":"069bdc0e-d9e1-4e93-a6fc-8aa439550dd0","Type":"ContainerStarted","Data":"ad8f5f3cb83d2ac1177f2f7eb89142497decd4142cde55e7f0d38b2163308524"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.231633 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.233854 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" event={"ID":"f0032304-8799-4a85-964f-2017bfd2dbc8","Type":"ContainerStarted","Data":"0641538bef3b64eaf3658ca0464a0ecb233e70aa265fac3e7c1dcd2bb70648d9"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.234315 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.235841 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" event={"ID":"6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4","Type":"ContainerStarted","Data":"d7d93153ca61ea6d634b2eae1fab9ea259c611289280ff2c9db9a61198ba1f99"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.236176 4740 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.272860 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" podStartSLOduration=3.440859663 podStartE2EDuration="18.272841796s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.691543034 +0000 UTC m=+843.067891755" lastFinishedPulling="2026-02-16 13:07:10.523525167 +0000 UTC m=+857.899873888" observedRunningTime="2026-02-16 13:07:12.246013994 +0000 UTC m=+859.622362715" watchObservedRunningTime="2026-02-16 13:07:12.272841796 +0000 UTC m=+859.649190517" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.273038 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" podStartSLOduration=3.100680738 podStartE2EDuration="18.273034902s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.930534118 +0000 UTC m=+843.306882839" lastFinishedPulling="2026-02-16 13:07:11.102888282 +0000 UTC m=+858.479237003" observedRunningTime="2026-02-16 13:07:12.268736072 +0000 UTC m=+859.645084803" watchObservedRunningTime="2026-02-16 13:07:12.273034902 +0000 UTC m=+859.649383623" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.308251 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" podStartSLOduration=3.47186374 podStartE2EDuration="18.308235495s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.689420505 +0000 UTC m=+843.065769226" lastFinishedPulling="2026-02-16 13:07:10.52579218 +0000 UTC m=+857.902140981" observedRunningTime="2026-02-16 13:07:12.307928155 +0000 
UTC m=+859.684276866" watchObservedRunningTime="2026-02-16 13:07:12.308235495 +0000 UTC m=+859.684584216" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.336875 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" podStartSLOduration=2.624265129 podStartE2EDuration="18.336857186s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.387420991 +0000 UTC m=+842.763769712" lastFinishedPulling="2026-02-16 13:07:11.100013058 +0000 UTC m=+858.476361769" observedRunningTime="2026-02-16 13:07:12.336299787 +0000 UTC m=+859.712648508" watchObservedRunningTime="2026-02-16 13:07:12.336857186 +0000 UTC m=+859.713205917" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.363603 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" podStartSLOduration=3.515790108 podStartE2EDuration="18.363582775s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.678641384 +0000 UTC m=+843.054990105" lastFinishedPulling="2026-02-16 13:07:10.526434051 +0000 UTC m=+857.902782772" observedRunningTime="2026-02-16 13:07:12.356550956 +0000 UTC m=+859.732899677" watchObservedRunningTime="2026-02-16 13:07:12.363582775 +0000 UTC m=+859.739931486" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.382630 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" podStartSLOduration=3.77535102 podStartE2EDuration="18.382606842s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.91919246 +0000 UTC m=+843.295541181" lastFinishedPulling="2026-02-16 13:07:10.526448282 +0000 UTC m=+857.902797003" observedRunningTime="2026-02-16 13:07:12.377917559 +0000 UTC m=+859.754266290" 
watchObservedRunningTime="2026-02-16 13:07:12.382606842 +0000 UTC m=+859.758955563" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.397938 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" podStartSLOduration=3.7894790499999997 podStartE2EDuration="18.39792288s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.916822863 +0000 UTC m=+843.293171584" lastFinishedPulling="2026-02-16 13:07:10.525266693 +0000 UTC m=+857.901615414" observedRunningTime="2026-02-16 13:07:12.396744672 +0000 UTC m=+859.773093393" watchObservedRunningTime="2026-02-16 13:07:12.39792288 +0000 UTC m=+859.774271601" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.422483 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" podStartSLOduration=3.261661531 podStartE2EDuration="18.422458028s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.364376513 +0000 UTC m=+842.740725234" lastFinishedPulling="2026-02-16 13:07:10.52517301 +0000 UTC m=+857.901521731" observedRunningTime="2026-02-16 13:07:12.412824994 +0000 UTC m=+859.789173725" watchObservedRunningTime="2026-02-16 13:07:12.422458028 +0000 UTC m=+859.798806759" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.433902 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" podStartSLOduration=3.530894457 podStartE2EDuration="18.433883328s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.623009716 +0000 UTC m=+842.999358437" lastFinishedPulling="2026-02-16 13:07:10.525998597 +0000 UTC m=+857.902347308" observedRunningTime="2026-02-16 13:07:12.433554428 +0000 UTC m=+859.809903149" 
watchObservedRunningTime="2026-02-16 13:07:12.433883328 +0000 UTC m=+859.810232059" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.457744 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" podStartSLOduration=3.242190717 podStartE2EDuration="18.457723583s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.884958118 +0000 UTC m=+843.261306839" lastFinishedPulling="2026-02-16 13:07:11.100490984 +0000 UTC m=+858.476839705" observedRunningTime="2026-02-16 13:07:12.447290924 +0000 UTC m=+859.823639665" watchObservedRunningTime="2026-02-16 13:07:12.457723583 +0000 UTC m=+859.834072314" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.483197 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" podStartSLOduration=3.246012681 podStartE2EDuration="18.48317076s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.289771199 +0000 UTC m=+842.666119910" lastFinishedPulling="2026-02-16 13:07:10.526929278 +0000 UTC m=+857.903277989" observedRunningTime="2026-02-16 13:07:12.478540739 +0000 UTC m=+859.854889460" watchObservedRunningTime="2026-02-16 13:07:12.48317076 +0000 UTC m=+859.859519481" Feb 16 13:07:15 crc kubenswrapper[4740]: I0216 13:07:15.575072 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:07:15 crc kubenswrapper[4740]: I0216 13:07:15.575474 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.277549 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" event={"ID":"7666c640-a9f4-4e09-b79c-7fd31116bd79","Type":"ContainerStarted","Data":"782e3861203e4b66f7375516abf19be229ad01a72ce37691a3dcfc0d91b53915"} Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.278098 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.279651 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" event={"ID":"121ee83b-e7f1-4302-9455-4cc6f53a07a5","Type":"ContainerStarted","Data":"df3d9b98206d044894775611ac1ef31c9aa82f4b3330bdc50e132795e701feca"} Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.279802 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.287262 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" event={"ID":"001719d5-3a51-4f6b-b316-9e98f53ed575","Type":"ContainerStarted","Data":"cb92ec2f351b1d9e65c1ba15213791e2ba9ddc8481f16300d327311889bd54b7"} Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.287534 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" event={"ID":"c6400043-1325-4af3-8c79-4b383441668c","Type":"ContainerStarted","Data":"ee1a832448add5257d492e9462714124a4be4dbb893756aacc6fd3e59955c8ba"} Feb 16 13:07:17 crc 
kubenswrapper[4740]: I0216 13:07:17.287654 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" event={"ID":"519c5b9e-ed4f-4cba-a731-70a22209f642","Type":"ContainerStarted","Data":"1b47cebdd371945a19bfa9f9e749331f6957e508cc1023abd834463c292cf490"} Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.287914 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.299844 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" podStartSLOduration=3.455433416 podStartE2EDuration="23.299826993s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.21249115 +0000 UTC m=+843.588839871" lastFinishedPulling="2026-02-16 13:07:16.056884727 +0000 UTC m=+863.433233448" observedRunningTime="2026-02-16 13:07:17.294785299 +0000 UTC m=+864.671134020" watchObservedRunningTime="2026-02-16 13:07:17.299826993 +0000 UTC m=+864.676175714" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.309938 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" podStartSLOduration=3.592764379 podStartE2EDuration="23.309920682s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.339727094 +0000 UTC m=+843.716075815" lastFinishedPulling="2026-02-16 13:07:16.056883397 +0000 UTC m=+863.433232118" observedRunningTime="2026-02-16 13:07:17.30679502 +0000 UTC m=+864.683143741" watchObservedRunningTime="2026-02-16 13:07:17.309920682 +0000 UTC m=+864.686269403" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.323284 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" podStartSLOduration=3.435780948 podStartE2EDuration="23.323261195s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.212526952 +0000 UTC m=+843.588875673" lastFinishedPulling="2026-02-16 13:07:16.100007189 +0000 UTC m=+863.476355920" observedRunningTime="2026-02-16 13:07:17.31849647 +0000 UTC m=+864.694845191" watchObservedRunningTime="2026-02-16 13:07:17.323261195 +0000 UTC m=+864.699609916" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.339045 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" podStartSLOduration=3.302223367 podStartE2EDuration="23.339024907s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.093617247 +0000 UTC m=+843.469965968" lastFinishedPulling="2026-02-16 13:07:16.130418747 +0000 UTC m=+863.506767508" observedRunningTime="2026-02-16 13:07:17.332967441 +0000 UTC m=+864.709316192" watchObservedRunningTime="2026-02-16 13:07:17.339024907 +0000 UTC m=+864.715373628" Feb 16 13:07:22 crc kubenswrapper[4740]: I0216 13:07:22.308074 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" podStartSLOduration=8.444854984 podStartE2EDuration="28.308045902s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.193782352 +0000 UTC m=+843.570131073" lastFinishedPulling="2026-02-16 13:07:16.05697328 +0000 UTC m=+863.433321991" observedRunningTime="2026-02-16 13:07:17.355265205 +0000 UTC m=+864.731613936" watchObservedRunningTime="2026-02-16 13:07:22.308045902 +0000 UTC m=+869.684394623" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.342586 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" event={"ID":"fce48c02-3aa2-404b-a9a4-7ba789835be0","Type":"ContainerStarted","Data":"2d3cbc07bcfc6f98ac797b064aa71ca152850ed6b1531ee6a4137422447ec884"} Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.343326 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.344165 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" event={"ID":"3e6434b1-64ba-481f-b001-8a465254dc0a","Type":"ContainerStarted","Data":"7cfcf45a9d3c0e3a79357088f866787e4046616eef45b1b7cb38b2b088f4f874"} Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.370255 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" podStartSLOduration=2.790622815 podStartE2EDuration="30.370237147s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.916638147 +0000 UTC m=+843.292986868" lastFinishedPulling="2026-02-16 13:07:23.496252479 +0000 UTC m=+870.872601200" observedRunningTime="2026-02-16 13:07:24.365517014 +0000 UTC m=+871.741865755" watchObservedRunningTime="2026-02-16 13:07:24.370237147 +0000 UTC m=+871.746585878" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.384193 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" podStartSLOduration=2.940517075 podStartE2EDuration="30.3841762s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.330146983 +0000 UTC m=+843.706495704" lastFinishedPulling="2026-02-16 13:07:23.773806108 +0000 UTC m=+871.150154829" observedRunningTime="2026-02-16 13:07:24.38017177 +0000 UTC 
m=+871.756520501" watchObservedRunningTime="2026-02-16 13:07:24.3841762 +0000 UTC m=+871.760524921" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.524072 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.541519 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.593545 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.640079 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.662919 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.694873 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.711376 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.767475 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.838199 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.964714 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.981002 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.023554 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.185489 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.187454 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.255637 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.258181 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.303547 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.493109 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.521184 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.358402 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" event={"ID":"ba6767b2-e03c-4c12-880d-90bd809d9b48","Type":"ContainerStarted","Data":"b67dd3080518a824d8576b9acac0f3e600b96fde499ccc9f8e8d8181a0798355"} Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.358927 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.367844 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.374673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.378747 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" podStartSLOduration=2.4767722770000002 podStartE2EDuration="32.378723607s" podCreationTimestamp="2026-02-16 
13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.093324888 +0000 UTC m=+843.469673609" lastFinishedPulling="2026-02-16 13:07:25.995276198 +0000 UTC m=+873.371624939" observedRunningTime="2026-02-16 13:07:26.3754256 +0000 UTC m=+873.751774381" watchObservedRunningTime="2026-02-16 13:07:26.378723607 +0000 UTC m=+873.755072328" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.532740 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.571423 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.575493 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.799665 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.962137 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5"] Feb 16 13:07:26 crc kubenswrapper[4740]: W0216 13:07:26.974627 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eba30c7_3dab_4b8f_8a22_2dae642a6ac5.slice/crio-12db95b6245dace301ab5f48c390155296e356708f774fdbf78c2d5e6aedef85 WatchSource:0}: Error finding container 12db95b6245dace301ab5f48c390155296e356708f774fdbf78c2d5e6aedef85: Status 404 returned error can't find the container with id 12db95b6245dace301ab5f48c390155296e356708f774fdbf78c2d5e6aedef85 Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.059518 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7"] Feb 16 13:07:27 crc kubenswrapper[4740]: W0216 13:07:27.066595 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76134787_0eff_47bd_982e_16c2c4f98f19.slice/crio-bf615d4633589a56c04637729baa7a9a047313ef5e9b1c4728954b5e9dd0c825 WatchSource:0}: Error finding container bf615d4633589a56c04637729baa7a9a047313ef5e9b1c4728954b5e9dd0c825: Status 404 returned error can't find the container with id bf615d4633589a56c04637729baa7a9a047313ef5e9b1c4728954b5e9dd0c825 Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.077852 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " 
pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.077911 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.083064 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.083159 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.335285 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.369760 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" event={"ID":"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5","Type":"ContainerStarted","Data":"12db95b6245dace301ab5f48c390155296e356708f774fdbf78c2d5e6aedef85"} Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.371918 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" event={"ID":"76134787-0eff-47bd-982e-16c2c4f98f19","Type":"ContainerStarted","Data":"bf615d4633589a56c04637729baa7a9a047313ef5e9b1c4728954b5e9dd0c825"} Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.595068 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl"] Feb 16 13:07:27 crc kubenswrapper[4740]: W0216 13:07:27.602931 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode749615e_a716_4e6e_8830_947b128e4e58.slice/crio-4d1c8aeeef6837c0d18452e9f1a36f7063af6ab762697d4d3dc047e595c97c34 WatchSource:0}: Error finding container 4d1c8aeeef6837c0d18452e9f1a36f7063af6ab762697d4d3dc047e595c97c34: Status 404 returned error can't find the container with id 4d1c8aeeef6837c0d18452e9f1a36f7063af6ab762697d4d3dc047e595c97c34 Feb 16 13:07:28 crc kubenswrapper[4740]: I0216 13:07:28.378243 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" event={"ID":"e749615e-a716-4e6e-8830-947b128e4e58","Type":"ContainerStarted","Data":"4d1c8aeeef6837c0d18452e9f1a36f7063af6ab762697d4d3dc047e595c97c34"} Feb 16 13:07:34 crc kubenswrapper[4740]: I0216 13:07:33.413192 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" event={"ID":"e749615e-a716-4e6e-8830-947b128e4e58","Type":"ContainerStarted","Data":"56f7e74b441774cbf8d353d6d64eee38fb2c980ad13142bf575e045081561cb5"} Feb 16 13:07:34 crc kubenswrapper[4740]: I0216 13:07:34.419106 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:34 crc kubenswrapper[4740]: I0216 13:07:34.457794 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" podStartSLOduration=40.457768363 podStartE2EDuration="40.457768363s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:07:34.445266256 +0000 UTC m=+881.821614987" watchObservedRunningTime="2026-02-16 13:07:34.457768363 +0000 UTC m=+881.834117084" Feb 16 13:07:34 crc kubenswrapper[4740]: I0216 13:07:34.935951 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:07:35 crc kubenswrapper[4740]: I0216 13:07:35.076625 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.461293 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" event={"ID":"76134787-0eff-47bd-982e-16c2c4f98f19","Type":"ContainerStarted","Data":"c802d10363fd4c0de847e852822b166249f1a99e639fc391869de9e542865e15"} Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.461930 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.462921 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" event={"ID":"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5","Type":"ContainerStarted","Data":"09bbb730533db26d01ef27bda2c918a420e8d92fb9e674e07228303420ab8218"} Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.463419 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.494963 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" podStartSLOduration=34.062029282 podStartE2EDuration="46.494943874s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:07:27.068750618 +0000 UTC m=+874.445099339" lastFinishedPulling="2026-02-16 13:07:39.50166521 +0000 UTC m=+886.878013931" observedRunningTime="2026-02-16 13:07:40.487193952 +0000 UTC m=+887.863542673" watchObservedRunningTime="2026-02-16 13:07:40.494943874 +0000 UTC m=+887.871292595" Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.513881 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" podStartSLOduration=33.994115504 podStartE2EDuration="46.513857338s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:07:26.978802295 +0000 UTC m=+874.355151016" lastFinishedPulling="2026-02-16 13:07:39.498544129 +0000 UTC m=+886.874892850" observedRunningTime="2026-02-16 13:07:40.505929031 +0000 UTC m=+887.882277772" watchObservedRunningTime="2026-02-16 13:07:40.513857338 +0000 UTC m=+887.890206059" Feb 16 13:07:45 crc 
kubenswrapper[4740]: I0216 13:07:45.575882 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:07:45 crc kubenswrapper[4740]: I0216 13:07:45.576304 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:07:46 crc kubenswrapper[4740]: I0216 13:07:46.543130 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:46 crc kubenswrapper[4740]: I0216 13:07:46.810707 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:47 crc kubenswrapper[4740]: I0216 13:07:47.341718 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.864616 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"] Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.866125 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.870278 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.882169 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.882408 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.883196 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-f9mjd" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.897416 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"] Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.935564 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npxln\" (UniqueName: \"kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.935620 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.995510 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"] Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.996822 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.998359 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.036690 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7mmx\" (UniqueName: \"kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.036781 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npxln\" (UniqueName: \"kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.036907 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.036940 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.037000 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.038279 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.047283 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"] Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.062295 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npxln\" (UniqueName: \"kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.137502 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.137930 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.137976 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-t7mmx\" (UniqueName: \"kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.138574 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.139088 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.154605 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7mmx\" (UniqueName: \"kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.211019 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.313232 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.653141 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"] Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.746777 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"] Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.838290 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" event={"ID":"6c71f0c5-67a1-4d67-b2de-dba8295ef084","Type":"ContainerStarted","Data":"5224238b02027848f7d0fad895fde7f066f7a57334c88225bed293830669bbbe"} Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.843632 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" event={"ID":"6a869f8c-c538-49fc-9e00-9b8b4b298687","Type":"ContainerStarted","Data":"72745cfed959172c621e0861c20ad46e7969585903965049f95725965dd4a30e"} Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.529896 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"] Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.557725 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"] Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.559028 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.571793 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"] Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.588638 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlv5h\" (UniqueName: \"kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.588711 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.588753 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.690282 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlv5h\" (UniqueName: \"kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.690360 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.690406 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.691488 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.692609 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.735926 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlv5h\" (UniqueName: \"kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.880187 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"] Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.887793 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.907779 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"] Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.909233 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.924500 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"] Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.096233 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rckbt\" (UniqueName: \"kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.096485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.096514 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.198879 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rckbt\" (UniqueName: 
\"kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.198977 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.199026 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.200355 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.200390 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.218877 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rckbt\" (UniqueName: \"kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" 
(UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.290203 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.419431 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"] Feb 16 13:08:09 crc kubenswrapper[4740]: W0216 13:08:09.459734 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23232c7f_b058_4eec_850d_b28aecf39a2f.slice/crio-4428fc13eb34f9023844cc127231859b8a3b24009a896cd3317e7735b1465f72 WatchSource:0}: Error finding container 4428fc13eb34f9023844cc127231859b8a3b24009a896cd3317e7735b1465f72: Status 404 returned error can't find the container with id 4428fc13eb34f9023844cc127231859b8a3b24009a896cd3317e7735b1465f72 Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.547450 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"] Feb 16 13:08:09 crc kubenswrapper[4740]: W0216 13:08:09.556535 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d209d0f_d8e6_4e45_aca9_f1e3245be3f8.slice/crio-e3b33a27e9f185d3c8746ab220eb49acc501a2fe3217ba858f1fea1221ed0edf WatchSource:0}: Error finding container e3b33a27e9f185d3c8746ab220eb49acc501a2fe3217ba858f1fea1221ed0edf: Status 404 returned error can't find the container with id e3b33a27e9f185d3c8746ab220eb49acc501a2fe3217ba858f1fea1221ed0edf Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.729302 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.730565 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.736344 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.736720 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.736892 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.737058 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.737188 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.738888 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-c72m7" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.740178 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.742191 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.887078 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" event={"ID":"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8","Type":"ContainerStarted","Data":"e3b33a27e9f185d3c8746ab220eb49acc501a2fe3217ba858f1fea1221ed0edf"} Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.888298 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" 
event={"ID":"23232c7f-b058-4eec-850d-b28aecf39a2f","Type":"ContainerStarted","Data":"4428fc13eb34f9023844cc127231859b8a3b24009a896cd3317e7735b1465f72"} Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.909759 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.909880 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.909919 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.909973 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910032 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtjzt\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt\") pod 
\"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910076 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910096 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910120 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910158 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910196 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " 
pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910234 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.014994 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015073 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015102 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015118 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015148 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015170 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtjzt\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015199 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015214 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015230 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015248 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf\") pod \"rabbitmq-server-0\" 
(UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015274 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.018691 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.020527 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.021248 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.021500 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.021753 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.021865 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.023690 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.030441 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.041707 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.041773 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtjzt\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt\") pod 
\"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.042000 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.071307 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.089558 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.091322 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.095631 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.095981 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.100730 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.100880 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.101312 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.105939 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.106243 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-x99bs" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.106976 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.107089 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.221957 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222037 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222107 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222139 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222164 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j2ff\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222182 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222213 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222231 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222256 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222271 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222314 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323342 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323634 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323670 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323687 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323708 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j2ff\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323724 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323740 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323755 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323773 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323787 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323845 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc 
kubenswrapper[4740]: I0216 13:08:10.325576 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.325802 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.326113 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.326705 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.327798 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.328045 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.340521 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.360732 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.361257 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.361649 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.374580 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.375913 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j2ff\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.464317 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.673712 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.694149 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.897115 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerStarted","Data":"934ceceace7365e9c0090e9a012126311d06e3cf25d1f4641361df1885a08c73"} Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.033802 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.376882 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.378446 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.382408 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.383264 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7lmgx" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.384018 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.384622 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.387959 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.390065 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.558754 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.558834 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkhz\" (UniqueName: \"kubernetes.io/projected/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kube-api-access-2pkhz\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.558888 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.558951 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.559168 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.559199 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.559238 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-default\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.559288 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kolla-config\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660690 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660787 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660828 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660863 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-default\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kolla-config\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") 
" pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660929 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pkhz\" (UniqueName: \"kubernetes.io/projected/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kube-api-access-2pkhz\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660995 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.661766 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.662791 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kolla-config\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.663177 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.663297 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.666442 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.681479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-default\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.682210 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.687461 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pkhz\" (UniqueName: 
\"kubernetes.io/projected/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kube-api-access-2pkhz\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.689529 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0" Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.715229 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.591547 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.593161 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.595767 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.596139 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kzk2x" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.596488 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.596625 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.601138 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780149 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780194 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780226 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqmc9\" (UniqueName: \"kubernetes.io/projected/0edd2079-790d-4061-aaf4-4213fe6adc7a-kube-api-access-kqmc9\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780307 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780333 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780363 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.822356 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.824968 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.829715 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lbppm" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.830003 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.834031 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.843757 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.878889 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-48mhx"] Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.891042 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893009 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893081 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893098 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893125 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893146 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqmc9\" (UniqueName: \"kubernetes.io/projected/0edd2079-790d-4061-aaf4-4213fe6adc7a-kube-api-access-kqmc9\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893212 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893238 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893652 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893942 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.895027 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.907037 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.909639 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.910053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.911101 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.916417 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqmc9\" (UniqueName: \"kubernetes.io/projected/0edd2079-790d-4061-aaf4-4213fe6adc7a-kube-api-access-kqmc9\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.955059 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.957322 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48mhx"] Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994381 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-memcached-tls-certs\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994454 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994492 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-kolla-config\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994532 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z57d4\" (UniqueName: \"kubernetes.io/projected/16622824-15d7-4ff1-8eac-85fe5d8da9db-kube-api-access-z57d4\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994581 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-combined-ca-bundle\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994600 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-gxgc8\" (UniqueName: \"kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994651 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994671 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-config-data\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095699 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-memcached-tls-certs\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095763 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095788 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-kolla-config\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095827 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z57d4\" (UniqueName: \"kubernetes.io/projected/16622824-15d7-4ff1-8eac-85fe5d8da9db-kube-api-access-z57d4\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095866 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-combined-ca-bundle\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095884 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxgc8\" (UniqueName: \"kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095920 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095935 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-config-data\") pod \"memcached-0\" (UID: 
\"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.096688 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-config-data\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.097837 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.098476 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-kolla-config\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.100656 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.102028 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-combined-ca-bundle\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.113236 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-memcached-tls-certs\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.121284 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z57d4\" (UniqueName: \"kubernetes.io/projected/16622824-15d7-4ff1-8eac-85fe5d8da9db-kube-api-access-z57d4\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.126609 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxgc8\" (UniqueName: \"kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.154189 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.217285 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.299310 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.131781 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.133451 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.136543 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wpsbn" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.145304 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.243013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpkcs\" (UniqueName: \"kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs\") pod \"kube-state-metrics-0\" (UID: \"dffdca64-bf57-49ca-9d8d-c6c752e59a37\") " pod="openstack/kube-state-metrics-0" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.344614 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpkcs\" (UniqueName: \"kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs\") pod \"kube-state-metrics-0\" (UID: \"dffdca64-bf57-49ca-9d8d-c6c752e59a37\") " pod="openstack/kube-state-metrics-0" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.368337 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpkcs\" (UniqueName: \"kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs\") pod \"kube-state-metrics-0\" (UID: \"dffdca64-bf57-49ca-9d8d-c6c752e59a37\") " pod="openstack/kube-state-metrics-0" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.456169 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.575221 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.575286 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.575337 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.576060 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.576115 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3" gracePeriod=600 Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.991612 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3" exitCode=0 Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.991663 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3"} Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.991709 4740 scope.go:117] "RemoveContainer" containerID="147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.411343 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qnt79"] Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.412855 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.414718 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.416564 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.417418 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6p6bf" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.459888 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79"] Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493486 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04335a5d-7cac-4a47-982c-70cae9db69ff-scripts\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") 
" pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493577 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwllx\" (UniqueName: \"kubernetes.io/projected/04335a5d-7cac-4a47-982c-70cae9db69ff-kube-api-access-hwllx\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493604 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493654 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-ovn-controller-tls-certs\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493687 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-log-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493707 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-combined-ca-bundle\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " 
pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493733 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.524875 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-crblj"] Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.526484 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.536702 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-crblj"] Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.594672 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-run\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.594867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-etc-ovs\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.594896 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b2536c4-0b82-4b42-9fe3-20237884d803-scripts\") pod \"ovn-controller-ovs-crblj\" (UID: 
\"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.594956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04335a5d-7cac-4a47-982c-70cae9db69ff-scripts\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595016 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwllx\" (UniqueName: \"kubernetes.io/projected/04335a5d-7cac-4a47-982c-70cae9db69ff-kube-api-access-hwllx\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595063 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595088 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-log\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595495 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg7fv\" (UniqueName: \"kubernetes.io/projected/9b2536c4-0b82-4b42-9fe3-20237884d803-kube-api-access-rg7fv\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" 
Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-ovn-controller-tls-certs\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595618 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-log-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595640 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-combined-ca-bundle\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595711 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-lib\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.597734 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.597797 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04335a5d-7cac-4a47-982c-70cae9db69ff-scripts\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595799 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595926 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-log-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.598670 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.604157 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-combined-ca-bundle\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.604729 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-ovn-controller-tls-certs\") pod \"ovn-controller-qnt79\" (UID: 
\"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.628088 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwllx\" (UniqueName: \"kubernetes.io/projected/04335a5d-7cac-4a47-982c-70cae9db69ff-kube-api-access-hwllx\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700315 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-run\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700401 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-etc-ovs\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700419 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b2536c4-0b82-4b42-9fe3-20237884d803-scripts\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700446 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-log\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700467 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg7fv\" (UniqueName: \"kubernetes.io/projected/9b2536c4-0b82-4b42-9fe3-20237884d803-kube-api-access-rg7fv\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700516 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-lib\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700804 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-run\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700858 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-lib\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700954 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-log\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.701039 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-etc-ovs\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.702934 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b2536c4-0b82-4b42-9fe3-20237884d803-scripts\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.729498 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg7fv\" (UniqueName: \"kubernetes.io/projected/9b2536c4-0b82-4b42-9fe3-20237884d803-kube-api-access-rg7fv\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.819136 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.852770 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.014190 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerStarted","Data":"57a77e39696732ba0c2e89d52e10f74cd6c56edebaba2ddd54807982f361b511"} Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.290208 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.291532 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.294618 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.295791 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.296063 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zf6qv" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.303183 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.304904 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.305107 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409145 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409218 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409260 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409287 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfrpj\" (UniqueName: \"kubernetes.io/projected/0ba53212-5a6f-45cb-9547-cccd4b36aa32-kube-api-access-kfrpj\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409308 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409343 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409402 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-config\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409425 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.510946 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511371 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-config\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511404 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511434 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511466 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " 
pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511504 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511527 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfrpj\" (UniqueName: \"kubernetes.io/projected/0ba53212-5a6f-45cb-9547-cccd4b36aa32-kube-api-access-kfrpj\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511545 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511798 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.512214 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-config\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.512568 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.512766 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.516583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.519520 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.537599 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfrpj\" (UniqueName: \"kubernetes.io/projected/0ba53212-5a6f-45cb-9547-cccd4b36aa32-kube-api-access-kfrpj\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.539970 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " 
pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.551774 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.633801 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.068248 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.073821 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.078706 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.078795 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.079193 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-hfz5r" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.079238 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.082019 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154236 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154329 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-config\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154355 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daca8d6b-05ed-4888-9833-9076a4256166-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154378 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 
crc kubenswrapper[4740]: I0216 13:08:22.154715 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154739 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4pm\" (UniqueName: \"kubernetes.io/projected/daca8d6b-05ed-4888-9833-9076a4256166-kube-api-access-9v4pm\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.256242 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.256312 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-config\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.256342 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daca8d6b-05ed-4888-9833-9076a4256166-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.256367 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.256829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daca8d6b-05ed-4888-9833-9076a4256166-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257021 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257101 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257130 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v4pm\" (UniqueName: \"kubernetes.io/projected/daca8d6b-05ed-4888-9833-9076a4256166-kube-api-access-9v4pm\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257519 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257642 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257711 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.258057 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-config\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.270100 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.270220 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.274330 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9v4pm\" (UniqueName: \"kubernetes.io/projected/daca8d6b-05ed-4888-9833-9076a4256166-kube-api-access-9v4pm\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.279484 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.290482 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.395855 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.417838 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"] Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.425729 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.430433 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"] Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.495284 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.495344 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4jl\" (UniqueName: \"kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.495428 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.596408 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.596468 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-ww4jl\" (UniqueName: \"kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.596521 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.596972 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.596971 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.622029 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4jl\" (UniqueName: \"kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.740634 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:26 crc kubenswrapper[4740]: I0216 13:08:26.811942 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.433122 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.434375 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7mmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-vnpwr_openstack(6c71f0c5-67a1-4d67-b2de-dba8295ef084): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.435752 4740 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" podUID="6c71f0c5-67a1-4d67-b2de-dba8295ef084" Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.467012 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.467204 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npxln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-969lr_openstack(6a869f8c-c538-49fc-9e00-9b8b4b298687): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.468367 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" podUID="6a869f8c-c538-49fc-9e00-9b8b4b298687"
Feb 16 13:08:28 crc kubenswrapper[4740]: I0216 13:08:28.077084 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0edd2079-790d-4061-aaf4-4213fe6adc7a","Type":"ContainerStarted","Data":"e26ac5cb88656bf9aee80557f4562f056cae18fedfbcc93ad5c41d158e7fe30d"}
Feb 16 13:08:28 crc kubenswrapper[4740]: I0216 13:08:28.931210 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr"
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:28.999637 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config\") pod \"6a869f8c-c538-49fc-9e00-9b8b4b298687\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") "
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.000102 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npxln\" (UniqueName: \"kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln\") pod \"6a869f8c-c538-49fc-9e00-9b8b4b298687\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") "
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.000661 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config" (OuterVolumeSpecName: "config") pod "6a869f8c-c538-49fc-9e00-9b8b4b298687" (UID: "6a869f8c-c538-49fc-9e00-9b8b4b298687"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.001277 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.004772 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln" (OuterVolumeSpecName: "kube-api-access-npxln") pod "6a869f8c-c538-49fc-9e00-9b8b4b298687" (UID: "6a869f8c-c538-49fc-9e00-9b8b4b298687"). InnerVolumeSpecName "kube-api-access-npxln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.097675 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr"
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.099657 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" event={"ID":"6a869f8c-c538-49fc-9e00-9b8b4b298687","Type":"ContainerDied","Data":"72745cfed959172c621e0861c20ad46e7969585903965049f95725965dd4a30e"}
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.104689 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npxln\" (UniqueName: \"kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.108132 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" event={"ID":"6c71f0c5-67a1-4d67-b2de-dba8295ef084","Type":"ContainerDied","Data":"5224238b02027848f7d0fad895fde7f066f7a57334c88225bed293830669bbbe"}
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.108171 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5224238b02027848f7d0fad895fde7f066f7a57334c88225bed293830669bbbe"
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.123103 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr"
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.207214 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config\") pod \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") "
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.207302 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc\") pod \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") "
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.207455 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7mmx\" (UniqueName: \"kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx\") pod \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") "
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.208295 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config" (OuterVolumeSpecName: "config") pod "6c71f0c5-67a1-4d67-b2de-dba8295ef084" (UID: "6c71f0c5-67a1-4d67-b2de-dba8295ef084"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.208732 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c71f0c5-67a1-4d67-b2de-dba8295ef084" (UID: "6c71f0c5-67a1-4d67-b2de-dba8295ef084"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.220051 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx" (OuterVolumeSpecName: "kube-api-access-t7mmx") pod "6c71f0c5-67a1-4d67-b2de-dba8295ef084" (UID: "6c71f0c5-67a1-4d67-b2de-dba8295ef084"). InnerVolumeSpecName "kube-api-access-t7mmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.233590 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"]
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.240929 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"]
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.313555 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7mmx\" (UniqueName: \"kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.313975 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.313991 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.314172 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a869f8c-c538-49fc-9e00-9b8b4b298687" path="/var/lib/kubelet/pods/6a869f8c-c538-49fc-9e00-9b8b4b298687/volumes"
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.376139 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.385932 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48mhx"]
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.395232 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.402011 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.440605 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 16 13:08:29 crc kubenswrapper[4740]: W0216 13:08:29.491646 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b2a3679_b8ef_4221_a9f6_ccd863696aa8.slice/crio-fe2c96b45e693300c10a72cae7c5527c6304aa177cf3a260602b9aca67a0c10b WatchSource:0}: Error finding container fe2c96b45e693300c10a72cae7c5527c6304aa177cf3a260602b9aca67a0c10b: Status 404 returned error can't find the container with id fe2c96b45e693300c10a72cae7c5527c6304aa177cf3a260602b9aca67a0c10b
Feb 16 13:08:29 crc kubenswrapper[4740]: W0216 13:08:29.495013 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddffdca64_bf57_49ca_9d8d_c6c752e59a37.slice/crio-866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d WatchSource:0}: Error finding container 866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d: Status 404 returned error can't find the container with id 866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.581299 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79"]
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.597317 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"]
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.675664 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 16 13:08:29 crc kubenswrapper[4740]: W0216 13:08:29.755996 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04335a5d_7cac_4a47_982c_70cae9db69ff.slice/crio-2c1141ffa9d5e7216afee78d064c4a42b7957154367dfa49a24c40909dec7caa WatchSource:0}: Error finding container 2c1141ffa9d5e7216afee78d064c4a42b7957154367dfa49a24c40909dec7caa: Status 404 returned error can't find the container with id 2c1141ffa9d5e7216afee78d064c4a42b7957154367dfa49a24c40909dec7caa
Feb 16 13:08:29 crc kubenswrapper[4740]: W0216 13:08:29.850146 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ba53212_5a6f_45cb_9547_cccd4b36aa32.slice/crio-88fcc056148622a2374e59e8d1c81ece16923b48f3ac8e63880ff8229d2c17cd WatchSource:0}: Error finding container 88fcc056148622a2374e59e8d1c81ece16923b48f3ac8e63880ff8229d2c17cd: Status 404 returned error can't find the container with id 88fcc056148622a2374e59e8d1c81ece16923b48f3ac8e63880ff8229d2c17cd
Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.887954 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-crblj"]
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.128162 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79" event={"ID":"04335a5d-7cac-4a47-982c-70cae9db69ff","Type":"ContainerStarted","Data":"2c1141ffa9d5e7216afee78d064c4a42b7957154367dfa49a24c40909dec7caa"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.131215 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9b2a3679-b8ef-4221-a9f6-ccd863696aa8","Type":"ContainerStarted","Data":"fe2c96b45e693300c10a72cae7c5527c6304aa177cf3a260602b9aca67a0c10b"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.135330 4740 generic.go:334] "Generic (PLEG): container finished" podID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerID="3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f" exitCode=0
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.135406 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" event={"ID":"23232c7f-b058-4eec-850d-b28aecf39a2f","Type":"ContainerDied","Data":"3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.150292 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0ba53212-5a6f-45cb-9547-cccd4b36aa32","Type":"ContainerStarted","Data":"88fcc056148622a2374e59e8d1c81ece16923b48f3ac8e63880ff8229d2c17cd"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.158950 4740 generic.go:334] "Generic (PLEG): container finished" podID="1e068ce5-e7a1-430c-97f7-fed550912288" containerID="72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9" exitCode=0
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.159020 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerDied","Data":"72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.159047 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerStarted","Data":"e25ea221b5fa4528f6319f69abf2088a3814b82a1e688ade98fa8da437436a8d"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.161717 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"daca8d6b-05ed-4888-9833-9076a4256166","Type":"ContainerStarted","Data":"d7a120c44daee76d758cad21793f47e0c8eddcda3951acfaa97024d72c57686d"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.164151 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerStarted","Data":"1fd3890eb822343ee419a082a86d7c0f7f37da9e46f4c355fdf04fe11a7d6219"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.166305 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"16622824-15d7-4ff1-8eac-85fe5d8da9db","Type":"ContainerStarted","Data":"fd1ec77c679da4c311a1b5aeb0eb5c952452696d2fb82d345cca583a7e0e44ee"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.169154 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.171794 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dffdca64-bf57-49ca-9d8d-c6c752e59a37","Type":"ContainerStarted","Data":"866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.175955 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-crblj" event={"ID":"9b2536c4-0b82-4b42-9fe3-20237884d803","Type":"ContainerStarted","Data":"c2720b066785d1f7aeb61e1aee929c024176a2112c4dc817a63d2876ff085255"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.180722 4740 generic.go:334] "Generic (PLEG): container finished" podID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerID="cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f" exitCode=0
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.180790 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr"
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.180957 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" event={"ID":"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8","Type":"ContainerDied","Data":"cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f"}
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.272023 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"]
Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.277961 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"]
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.192055 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerStarted","Data":"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109"}
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.196168 4740 generic.go:334] "Generic (PLEG): container finished" podID="aca31aa1-429e-4f65-acd5-8896734d0713" containerID="34e7fc9ac737075feeb07ec3e7ed9c671f07626ad33bba4d05665def21010930" exitCode=0
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.196419 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerDied","Data":"34e7fc9ac737075feeb07ec3e7ed9c671f07626ad33bba4d05665def21010930"}
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.199222 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerStarted","Data":"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133"}
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.300800 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c71f0c5-67a1-4d67-b2de-dba8295ef084" path="/var/lib/kubelet/pods/6c71f0c5-67a1-4d67-b2de-dba8295ef084/volumes"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.404436 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-b4j4m"]
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.405772 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.408980 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.417196 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b4j4m"]
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.468766 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp9df\" (UniqueName: \"kubernetes.io/projected/ad1b2300-a42b-4a99-b186-7661bb410a36-kube-api-access-wp9df\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.468886 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-combined-ca-bundle\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.468947 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovs-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.469028 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovn-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.469079 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.469145 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1b2300-a42b-4a99-b186-7661bb410a36-config\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571069 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp9df\" (UniqueName: \"kubernetes.io/projected/ad1b2300-a42b-4a99-b186-7661bb410a36-kube-api-access-wp9df\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-combined-ca-bundle\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovs-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571249 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovn-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571297 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571364 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1b2300-a42b-4a99-b186-7661bb410a36-config\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571560 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovs-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571638 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovn-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.572187 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1b2300-a42b-4a99-b186-7661bb410a36-config\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.583284 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.591331 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"]
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.595784 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-combined-ca-bundle\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.606835 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp9df\" (UniqueName: \"kubernetes.io/projected/ad1b2300-a42b-4a99-b186-7661bb410a36-kube-api-access-wp9df\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.626641 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"]
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.628554 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.632627 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.651537 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"]
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.683270 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.683436 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.683559 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.683620 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdxj9\" (UniqueName: \"kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.742606 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-b4j4m"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.785701 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.785777 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdxj9\" (UniqueName: \"kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.785910 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.785959 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.786686 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.786862 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.786956 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.815673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdxj9\" (UniqueName: \"kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.870874 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"]
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.912534 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"]
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.915403 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.920607 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.932583 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"]
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.988446 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.988600 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.988739 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gndhd\" (UniqueName: \"kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.988860 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.988892 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.016956 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.090098 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.090196 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gndhd\" (UniqueName: \"kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.090227 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.090249 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.090270 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.092299 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.093880 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.094995 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.095286 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.115504 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gndhd\" (UniqueName: \"kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.258676 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:37 crc kubenswrapper[4740]: I0216 13:08:37.952123 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"]
Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.047655 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wqc4t"]
Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.049338 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.051980 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b4j4m"] Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.060265 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqc4t"] Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.103389 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfwsz\" (UniqueName: \"kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.103719 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.104015 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.142967 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"] Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.206897 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfwsz\" 
(UniqueName: \"kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.207009 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.207044 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.207639 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.207688 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.234009 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfwsz\" (UniqueName: 
\"kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.259914 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" event={"ID":"23232c7f-b058-4eec-850d-b28aecf39a2f","Type":"ContainerStarted","Data":"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2"} Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.260098 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="dnsmasq-dns" containerID="cri-o://0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2" gracePeriod=10 Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.260186 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.265891 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" event={"ID":"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8","Type":"ContainerStarted","Data":"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c"} Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.265839 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="dnsmasq-dns" containerID="cri-o://70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c" gracePeriod=10 Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.266051 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.313696 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" podStartSLOduration=10.964414827 podStartE2EDuration="30.313676422s" podCreationTimestamp="2026-02-16 13:08:08 +0000 UTC" firstStartedPulling="2026-02-16 13:08:09.560142838 +0000 UTC m=+916.936491569" lastFinishedPulling="2026-02-16 13:08:28.909404443 +0000 UTC m=+936.285753164" observedRunningTime="2026-02-16 13:08:38.311577296 +0000 UTC m=+945.687926027" watchObservedRunningTime="2026-02-16 13:08:38.313676422 +0000 UTC m=+945.690025143" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.316436 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" podStartSLOduration=10.980704879 podStartE2EDuration="30.316429138s" podCreationTimestamp="2026-02-16 13:08:08 +0000 UTC" firstStartedPulling="2026-02-16 13:08:09.46171547 +0000 UTC m=+916.838064191" lastFinishedPulling="2026-02-16 13:08:28.797439729 +0000 UTC m=+936.173788450" observedRunningTime="2026-02-16 13:08:38.293611781 +0000 UTC m=+945.669960502" watchObservedRunningTime="2026-02-16 13:08:38.316429138 +0000 UTC m=+945.692777859" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.377133 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: W0216 13:08:38.558054 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92418e50_20f2_495c_9b06_963a5cd506d1.slice/crio-9fcab72dfdae204ac7d5555a0d11d6020c89a6deabe72c826aa4804ab6026578 WatchSource:0}: Error finding container 9fcab72dfdae204ac7d5555a0d11d6020c89a6deabe72c826aa4804ab6026578: Status 404 returned error can't find the container with id 9fcab72dfdae204ac7d5555a0d11d6020c89a6deabe72c826aa4804ab6026578 Feb 16 13:08:38 crc kubenswrapper[4740]: W0216 13:08:38.566076 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9909a57b_336c_4687_855f_495a78d21af7.slice/crio-ba54eead2dab21215317f93298356fea3b351391f33454128a61a0ee8e86082c WatchSource:0}: Error finding container ba54eead2dab21215317f93298356fea3b351391f33454128a61a0ee8e86082c: Status 404 returned error can't find the container with id ba54eead2dab21215317f93298356fea3b351391f33454128a61a0ee8e86082c Feb 16 13:08:38 crc kubenswrapper[4740]: W0216 13:08:38.569774 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad1b2300_a42b_4a99_b186_7661bb410a36.slice/crio-4ac73e637d8794dca938b293d35a65541f42b6bf655c926cc2e9a3f1c3a43d92 WatchSource:0}: Error finding container 4ac73e637d8794dca938b293d35a65541f42b6bf655c926cc2e9a3f1c3a43d92: Status 404 returned error can't find the container with id 4ac73e637d8794dca938b293d35a65541f42b6bf655c926cc2e9a3f1c3a43d92 Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.971599 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.017796 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config\") pod \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.018009 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc\") pod \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.018059 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rckbt\" (UniqueName: \"kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt\") pod \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.078617 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt" (OuterVolumeSpecName: "kube-api-access-rckbt") pod "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" (UID: "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8"). InnerVolumeSpecName "kube-api-access-rckbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.090092 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.119878 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config\") pod \"23232c7f-b058-4eec-850d-b28aecf39a2f\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.119950 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc\") pod \"23232c7f-b058-4eec-850d-b28aecf39a2f\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.120069 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlv5h\" (UniqueName: \"kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h\") pod \"23232c7f-b058-4eec-850d-b28aecf39a2f\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.120402 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rckbt\" (UniqueName: \"kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.146847 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h" (OuterVolumeSpecName: "kube-api-access-xlv5h") pod "23232c7f-b058-4eec-850d-b28aecf39a2f" (UID: "23232c7f-b058-4eec-850d-b28aecf39a2f"). InnerVolumeSpecName "kube-api-access-xlv5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.222049 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlv5h\" (UniqueName: \"kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.228859 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqc4t"] Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.281854 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-crblj" event={"ID":"9b2536c4-0b82-4b42-9fe3-20237884d803","Type":"ContainerStarted","Data":"216c4948a4c881dfe67ae6cb7f8b7f19445793dbd5d5d9ccf750393f440da614"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.297952 4740 generic.go:334] "Generic (PLEG): container finished" podID="1e068ce5-e7a1-430c-97f7-fed550912288" containerID="ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7" exitCode=0 Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.324232 4740 generic.go:334] "Generic (PLEG): container finished" podID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerID="0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2" exitCode=0 Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.324448 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.354438 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23232c7f-b058-4eec-850d-b28aecf39a2f" (UID: "23232c7f-b058-4eec-850d-b28aecf39a2f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.354707 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config" (OuterVolumeSpecName: "config") pod "23232c7f-b058-4eec-850d-b28aecf39a2f" (UID: "23232c7f-b058-4eec-850d-b28aecf39a2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.372421 4740 generic.go:334] "Generic (PLEG): container finished" podID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerID="70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c" exitCode=0 Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.372561 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.415164 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" (UID: "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.433063 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.433091 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.433101 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.435579 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config" (OuterVolumeSpecName: "config") pod "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" (UID: "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.436801 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.728440587 podStartE2EDuration="27.436786132s" podCreationTimestamp="2026-02-16 13:08:12 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.485217931 +0000 UTC m=+936.861566652" lastFinishedPulling="2026-02-16 13:08:37.193563466 +0000 UTC m=+944.569912197" observedRunningTime="2026-02-16 13:08:39.435915445 +0000 UTC m=+946.812264166" watchObservedRunningTime="2026-02-16 13:08:39.436786132 +0000 UTC m=+946.813134853" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.529616 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerDied","Data":"ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.529943 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0edd2079-790d-4061-aaf4-4213fe6adc7a","Type":"ContainerStarted","Data":"657317f31c1050ec69af0b3e23dd72006ff7b6d9c80a87b62faa23e6ceaa2d28"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.529976 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.529997 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerStarted","Data":"6d8ddc766e620293e4ce19966be3eade1f701ea7f4bc85aaa8dbabf74209de85"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530018 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" 
event={"ID":"23232c7f-b058-4eec-850d-b28aecf39a2f","Type":"ContainerDied","Data":"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530030 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" event={"ID":"23232c7f-b058-4eec-850d-b28aecf39a2f","Type":"ContainerDied","Data":"4428fc13eb34f9023844cc127231859b8a3b24009a896cd3317e7735b1465f72"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530039 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b4j4m" event={"ID":"ad1b2300-a42b-4a99-b186-7661bb410a36","Type":"ContainerStarted","Data":"4ac73e637d8794dca938b293d35a65541f42b6bf655c926cc2e9a3f1c3a43d92"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530049 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" event={"ID":"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8","Type":"ContainerDied","Data":"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530059 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" event={"ID":"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8","Type":"ContainerDied","Data":"e3b33a27e9f185d3c8746ab220eb49acc501a2fe3217ba858f1fea1221ed0edf"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530068 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-76rjc" event={"ID":"92418e50-20f2-495c-9b06-963a5cd506d1","Type":"ContainerStarted","Data":"9fcab72dfdae204ac7d5555a0d11d6020c89a6deabe72c826aa4804ab6026578"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530077 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" 
event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerStarted","Data":"c019ef5cfe36bc351706aadd4baf7b12f1204352eb6b5f824b90ceeaf17080aa"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530086 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"16622824-15d7-4ff1-8eac-85fe5d8da9db","Type":"ContainerStarted","Data":"41fe440b607b3e581c040bce88540e4c075d11a1b13113192d67d650946ced0f"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530095 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" event={"ID":"9909a57b-336c-4687-855f-495a78d21af7","Type":"ContainerStarted","Data":"ba54eead2dab21215317f93298356fea3b351391f33454128a61a0ee8e86082c"} Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530124 4740 scope.go:117] "RemoveContainer" containerID="0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.535739 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.555699 4740 scope.go:117] "RemoveContainer" containerID="3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.594728 4740 scope.go:117] "RemoveContainer" containerID="0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2" Feb 16 13:08:39 crc kubenswrapper[4740]: E0216 13:08:39.595296 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2\": container with ID starting with 0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2 not found: ID does not exist" 
containerID="0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.595358 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2"} err="failed to get container status \"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2\": rpc error: code = NotFound desc = could not find container \"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2\": container with ID starting with 0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2 not found: ID does not exist" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.595409 4740 scope.go:117] "RemoveContainer" containerID="3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f" Feb 16 13:08:39 crc kubenswrapper[4740]: E0216 13:08:39.595768 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f\": container with ID starting with 3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f not found: ID does not exist" containerID="3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.595796 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f"} err="failed to get container status \"3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f\": rpc error: code = NotFound desc = could not find container \"3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f\": container with ID starting with 3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f not found: ID does not exist" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.595829 4740 scope.go:117] 
"RemoveContainer" containerID="70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.637616 4740 scope.go:117] "RemoveContainer" containerID="cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.672080 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"] Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.698880 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"] Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.711685 4740 scope.go:117] "RemoveContainer" containerID="70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c" Feb 16 13:08:39 crc kubenswrapper[4740]: E0216 13:08:39.715925 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c\": container with ID starting with 70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c not found: ID does not exist" containerID="70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.716074 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c"} err="failed to get container status \"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c\": rpc error: code = NotFound desc = could not find container \"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c\": container with ID starting with 70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c not found: ID does not exist" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.716163 4740 scope.go:117] "RemoveContainer" 
containerID="cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f" Feb 16 13:08:39 crc kubenswrapper[4740]: E0216 13:08:39.718084 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f\": container with ID starting with cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f not found: ID does not exist" containerID="cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.718198 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f"} err="failed to get container status \"cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f\": rpc error: code = NotFound desc = could not find container \"cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f\": container with ID starting with cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f not found: ID does not exist" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.757207 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"] Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.766975 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"] Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.413496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dffdca64-bf57-49ca-9d8d-c6c752e59a37","Type":"ContainerStarted","Data":"393ab583d053b27f8beb9f7c43ec09687ddca9d3ed124563beb0f63d010c9ebb"} Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.413581 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 
13:08:40.415758 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b2536c4-0b82-4b42-9fe3-20237884d803" containerID="216c4948a4c881dfe67ae6cb7f8b7f19445793dbd5d5d9ccf750393f440da614" exitCode=0 Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.415832 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-crblj" event={"ID":"9b2536c4-0b82-4b42-9fe3-20237884d803","Type":"ContainerDied","Data":"216c4948a4c881dfe67ae6cb7f8b7f19445793dbd5d5d9ccf750393f440da614"} Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.419223 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"daca8d6b-05ed-4888-9833-9076a4256166","Type":"ContainerStarted","Data":"53fa5ab99d61461f0532c2a5ac09966c06c0389b14884e3296fd140f78453484"} Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.422962 4740 generic.go:334] "Generic (PLEG): container finished" podID="92418e50-20f2-495c-9b06-963a5cd506d1" containerID="6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302" exitCode=0 Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.423096 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-76rjc" event={"ID":"92418e50-20f2-495c-9b06-963a5cd506d1","Type":"ContainerDied","Data":"6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302"} Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.431091 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.217727926 podStartE2EDuration="25.431071289s" podCreationTimestamp="2026-02-16 13:08:15 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.516873097 +0000 UTC m=+936.893221808" lastFinishedPulling="2026-02-16 13:08:38.73021645 +0000 UTC m=+946.106565171" observedRunningTime="2026-02-16 13:08:40.430323095 +0000 UTC m=+947.806671866" watchObservedRunningTime="2026-02-16 13:08:40.431071289 +0000 UTC 
m=+947.807420010" Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.432896 4740 generic.go:334] "Generic (PLEG): container finished" podID="aca31aa1-429e-4f65-acd5-8896734d0713" containerID="c019ef5cfe36bc351706aadd4baf7b12f1204352eb6b5f824b90ceeaf17080aa" exitCode=0 Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.433083 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerDied","Data":"c019ef5cfe36bc351706aadd4baf7b12f1204352eb6b5f824b90ceeaf17080aa"} Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.438338 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9b2a3679-b8ef-4221-a9f6-ccd863696aa8","Type":"ContainerStarted","Data":"62dc156e2ff56e656b16dd1a559de7911f6d6db8b4776d5d63e1362b33d3e7f8"} Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.441711 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0ba53212-5a6f-45cb-9547-cccd4b36aa32","Type":"ContainerStarted","Data":"d9d1e853b1d7660fcf74e6807238d3d53eb29720fa3698242837712fb1a7222b"} Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.445010 4740 generic.go:334] "Generic (PLEG): container finished" podID="9909a57b-336c-4687-855f-495a78d21af7" containerID="da3a287d7891a787094b6dfd14fda8d39fc078ebe1345f09ce6fe824a1be10a9" exitCode=0 Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.445091 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" event={"ID":"9909a57b-336c-4687-855f-495a78d21af7","Type":"ContainerDied","Data":"da3a287d7891a787094b6dfd14fda8d39fc078ebe1345f09ce6fe824a1be10a9"} Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.446695 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79" 
event={"ID":"04335a5d-7cac-4a47-982c-70cae9db69ff","Type":"ContainerStarted","Data":"7532e37515439a8c65a2865febec5f9223ab31c4950f64ee6aaa622be3c70b96"} Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.447241 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qnt79" Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.448717 4740 generic.go:334] "Generic (PLEG): container finished" podID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerID="6c1987d37eda48bc957fc38cb1ada6838a4e15b9f5f27415b2d688878ac2b28b" exitCode=0 Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.448765 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerDied","Data":"6c1987d37eda48bc957fc38cb1ada6838a4e15b9f5f27415b2d688878ac2b28b"} Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.513252 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qnt79" podStartSLOduration=14.575198163 podStartE2EDuration="22.513230205s" podCreationTimestamp="2026-02-16 13:08:18 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.760102651 +0000 UTC m=+937.136451372" lastFinishedPulling="2026-02-16 13:08:37.698134693 +0000 UTC m=+945.074483414" observedRunningTime="2026-02-16 13:08:40.505429639 +0000 UTC m=+947.881778390" watchObservedRunningTime="2026-02-16 13:08:40.513230205 +0000 UTC m=+947.889578936" Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.293298 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" path="/var/lib/kubelet/pods/23232c7f-b058-4eec-850d-b28aecf39a2f/volumes" Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.294354 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" 
path="/var/lib/kubelet/pods/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8/volumes" Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.458659 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerStarted","Data":"23c26fd38fe2c111190f23402f390777887a9905dc758d9f6ac51ca114039940"} Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.463680 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0ba53212-5a6f-45cb-9547-cccd4b36aa32","Type":"ContainerStarted","Data":"92ed8039da0d38e6f99d32d89d6cf65a131922ccb0a744b0a5e702afc7d26096"} Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.486961 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-crblj" event={"ID":"9b2536c4-0b82-4b42-9fe3-20237884d803","Type":"ContainerStarted","Data":"eab27ce5945b5192b4ad81108f9607e53e3ef1f7924c8c2a91b27de1c6d9272d"} Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.501554 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-skhl7" podStartSLOduration=10.154693358 podStartE2EDuration="18.501533673s" podCreationTimestamp="2026-02-16 13:08:23 +0000 UTC" firstStartedPulling="2026-02-16 13:08:32.837040332 +0000 UTC m=+940.213389063" lastFinishedPulling="2026-02-16 13:08:41.183880647 +0000 UTC m=+948.560229378" observedRunningTime="2026-02-16 13:08:41.486650664 +0000 UTC m=+948.862999385" watchObservedRunningTime="2026-02-16 13:08:41.501533673 +0000 UTC m=+948.877882394" Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.504879 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b4j4m" event={"ID":"ad1b2300-a42b-4a99-b186-7661bb410a36","Type":"ContainerStarted","Data":"ec2555abb5b3978f7b8ce2a60bbc64687f783d531491b0850e6af9047c2fedaa"} Feb 16 13:08:41 crc kubenswrapper[4740]: 
I0216 13:08:41.514091 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerStarted","Data":"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37"} Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.514320 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.432316983 podStartE2EDuration="23.514299004s" podCreationTimestamp="2026-02-16 13:08:18 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.852590081 +0000 UTC m=+937.228938802" lastFinishedPulling="2026-02-16 13:08:40.934572102 +0000 UTC m=+948.310920823" observedRunningTime="2026-02-16 13:08:41.512163247 +0000 UTC m=+948.888511968" watchObservedRunningTime="2026-02-16 13:08:41.514299004 +0000 UTC m=+948.890647725" Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.520169 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"daca8d6b-05ed-4888-9833-9076a4256166","Type":"ContainerStarted","Data":"d4fd34567da2277d7ec91c02b54038023b063a0cd5104da46f6104a0ca4de543"} Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.522599 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-76rjc" event={"ID":"92418e50-20f2-495c-9b06-963a5cd506d1","Type":"ContainerStarted","Data":"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a"} Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.523907 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.543753 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" 
event={"ID":"9909a57b-336c-4687-855f-495a78d21af7","Type":"ContainerStarted","Data":"1d33c0ee4e7ac8c8f775d8a370c2c1dc54afcf20eed20a45abe7ffb6694bb878"} Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.543826 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.562570 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-b4j4m" podStartSLOduration=8.231317947 podStartE2EDuration="10.562547053s" podCreationTimestamp="2026-02-16 13:08:31 +0000 UTC" firstStartedPulling="2026-02-16 13:08:38.599096933 +0000 UTC m=+945.975445654" lastFinishedPulling="2026-02-16 13:08:40.930326039 +0000 UTC m=+948.306674760" observedRunningTime="2026-02-16 13:08:41.558727382 +0000 UTC m=+948.935076103" watchObservedRunningTime="2026-02-16 13:08:41.562547053 +0000 UTC m=+948.938895774" Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.584220 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" podStartSLOduration=10.584204624 podStartE2EDuration="10.584204624s" podCreationTimestamp="2026-02-16 13:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:41.583319547 +0000 UTC m=+948.959668278" watchObservedRunningTime="2026-02-16 13:08:41.584204624 +0000 UTC m=+948.960553345" Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.623474 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-48mhx" podStartSLOduration=19.504143029 podStartE2EDuration="29.623452229s" podCreationTimestamp="2026-02-16 13:08:12 +0000 UTC" firstStartedPulling="2026-02-16 13:08:30.297388897 +0000 UTC m=+937.673737618" lastFinishedPulling="2026-02-16 13:08:40.416698087 +0000 UTC m=+947.793046818" 
observedRunningTime="2026-02-16 13:08:41.606571658 +0000 UTC m=+948.982920389" watchObservedRunningTime="2026-02-16 13:08:41.623452229 +0000 UTC m=+948.999800950" Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.635613 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-76rjc" podStartSLOduration=10.635593661 podStartE2EDuration="10.635593661s" podCreationTimestamp="2026-02-16 13:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:41.633529827 +0000 UTC m=+949.009878548" watchObservedRunningTime="2026-02-16 13:08:41.635593661 +0000 UTC m=+949.011942382" Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.663583 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.203487053 podStartE2EDuration="20.663562811s" podCreationTimestamp="2026-02-16 13:08:21 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.491644354 +0000 UTC m=+936.867993085" lastFinishedPulling="2026-02-16 13:08:40.951720122 +0000 UTC m=+948.328068843" observedRunningTime="2026-02-16 13:08:41.662722885 +0000 UTC m=+949.039071606" watchObservedRunningTime="2026-02-16 13:08:41.663562811 +0000 UTC m=+949.039911532" Feb 16 13:08:42 crc kubenswrapper[4740]: I0216 13:08:42.397130 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:42 crc kubenswrapper[4740]: I0216 13:08:42.551583 4740 generic.go:334] "Generic (PLEG): container finished" podID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerID="d47df2f9f4e292409c655fe7591c226fe30607c4e0638aae9e313eb8f50fbed1" exitCode=0 Feb 16 13:08:42 crc kubenswrapper[4740]: I0216 13:08:42.551686 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" 
event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerDied","Data":"d47df2f9f4e292409c655fe7591c226fe30607c4e0638aae9e313eb8f50fbed1"} Feb 16 13:08:42 crc kubenswrapper[4740]: I0216 13:08:42.557289 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-crblj" event={"ID":"9b2536c4-0b82-4b42-9fe3-20237884d803","Type":"ContainerStarted","Data":"1865a9e30345472085877099f6bae95223a69be3c4d585be9e24425096b09e26"} Feb 16 13:08:42 crc kubenswrapper[4740]: I0216 13:08:42.600988 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-crblj" podStartSLOduration=17.253447888 podStartE2EDuration="24.600961988s" podCreationTimestamp="2026-02-16 13:08:18 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.995086445 +0000 UTC m=+937.371435176" lastFinishedPulling="2026-02-16 13:08:37.342600555 +0000 UTC m=+944.718949276" observedRunningTime="2026-02-16 13:08:42.596391775 +0000 UTC m=+949.972740506" watchObservedRunningTime="2026-02-16 13:08:42.600961988 +0000 UTC m=+949.977310719" Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.159081 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.299489 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.299540 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.348731 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.397650 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 
13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.436585 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.566312 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerStarted","Data":"9bfa3be63dee77789c81c4f5ee3f7754049b2872bb9daa83211f215863713928"} Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.566612 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.566723 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.585386 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wqc4t" podStartSLOduration=3.442635229 podStartE2EDuration="5.585367963s" podCreationTimestamp="2026-02-16 13:08:38 +0000 UTC" firstStartedPulling="2026-02-16 13:08:40.791361856 +0000 UTC m=+948.167710567" lastFinishedPulling="2026-02-16 13:08:42.93409458 +0000 UTC m=+950.310443301" observedRunningTime="2026-02-16 13:08:43.583407112 +0000 UTC m=+950.959755833" watchObservedRunningTime="2026-02-16 13:08:43.585367963 +0000 UTC m=+950.961716684" Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.634441 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.679686 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.742935 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 
13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.742993 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.573106 4740 generic.go:334] "Generic (PLEG): container finished" podID="0edd2079-790d-4061-aaf4-4213fe6adc7a" containerID="657317f31c1050ec69af0b3e23dd72006ff7b6d9c80a87b62faa23e6ceaa2d28" exitCode=0 Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.573158 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0edd2079-790d-4061-aaf4-4213fe6adc7a","Type":"ContainerDied","Data":"657317f31c1050ec69af0b3e23dd72006ff7b6d9c80a87b62faa23e6ceaa2d28"} Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.586596 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b2a3679-b8ef-4221-a9f6-ccd863696aa8" containerID="62dc156e2ff56e656b16dd1a559de7911f6d6db8b4776d5d63e1362b33d3e7f8" exitCode=0 Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.587359 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9b2a3679-b8ef-4221-a9f6-ccd863696aa8","Type":"ContainerDied","Data":"62dc156e2ff56e656b16dd1a559de7911f6d6db8b4776d5d63e1362b33d3e7f8"} Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.588055 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.666476 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.721276 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.797432 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-skhl7" 
podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="registry-server" probeResult="failure" output=< Feb 16 13:08:44 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 13:08:44 crc kubenswrapper[4740]: > Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.968801 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 16 13:08:44 crc kubenswrapper[4740]: E0216 13:08:44.969168 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="init" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969185 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="init" Feb 16 13:08:44 crc kubenswrapper[4740]: E0216 13:08:44.969211 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="dnsmasq-dns" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969217 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="dnsmasq-dns" Feb 16 13:08:44 crc kubenswrapper[4740]: E0216 13:08:44.969233 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="dnsmasq-dns" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969240 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="dnsmasq-dns" Feb 16 13:08:44 crc kubenswrapper[4740]: E0216 13:08:44.969257 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="init" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969262 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="init" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969398 4740 
memory_manager.go:354] "RemoveStaleState removing state" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="dnsmasq-dns" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969418 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="dnsmasq-dns" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.971481 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.973439 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.973704 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.974042 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-x7z9r" Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.974096 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.020754 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057262 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057352 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljm8d\" (UniqueName: 
\"kubernetes.io/projected/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-kube-api-access-ljm8d\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057389 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-config\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057443 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057504 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-scripts\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057537 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057644 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159003 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljm8d\" (UniqueName: \"kubernetes.io/projected/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-kube-api-access-ljm8d\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159073 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-config\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159132 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159189 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-scripts\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159222 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159254 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159292 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.163063 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-config\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.167667 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-scripts\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.168176 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.192055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 
13:08:45.195982 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.201483 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.207700 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljm8d\" (UniqueName: \"kubernetes.io/projected/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-kube-api-access-ljm8d\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.292662 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.518339 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.606978 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"] Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.607246 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="dnsmasq-dns" containerID="cri-o://1d33c0ee4e7ac8c8f775d8a370c2c1dc54afcf20eed20a45abe7ffb6694bb878" gracePeriod=10 Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.621800 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9b2a3679-b8ef-4221-a9f6-ccd863696aa8","Type":"ContainerStarted","Data":"4ab60d1f2e380b2a88dbc22d2539c75daba3b4561b478d6bd7f1bc4cc49af524"} Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.675870 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"] Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.677502 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.680116 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.534704061 podStartE2EDuration="35.680097168s" podCreationTimestamp="2026-02-16 13:08:10 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.508660429 +0000 UTC m=+936.885009140" lastFinishedPulling="2026-02-16 13:08:37.654053526 +0000 UTC m=+945.030402247" observedRunningTime="2026-02-16 13:08:45.675122101 +0000 UTC m=+953.051470832" watchObservedRunningTime="2026-02-16 13:08:45.680097168 +0000 UTC m=+953.056445879" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.788701 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"] Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.884241 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.884323 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsv45\" (UniqueName: \"kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.884401 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: 
\"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.884479 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.884505 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.986109 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.986198 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.986246 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.986289 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsv45\" (UniqueName: \"kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.986387 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.987250 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.987363 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.987449 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.987963 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.006071 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsv45\" (UniqueName: \"kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.059749 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 13:08:46 crc kubenswrapper[4740]: W0216 13:08:46.065447 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4f80435_6b1f_45e1_bc0c_ff150bd3b33b.slice/crio-a3d821a1b6b3cc267c4ee7b7d307c31c520e811e8e2228990c972b36d6dcd1bb WatchSource:0}: Error finding container a3d821a1b6b3cc267c4ee7b7d307c31c520e811e8e2228990c972b36d6dcd1bb: Status 404 returned error can't find the container with id a3d821a1b6b3cc267c4ee7b7d307c31c520e811e8e2228990c972b36d6dcd1bb Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.085846 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.522650 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"] Feb 16 13:08:46 crc kubenswrapper[4740]: W0216 13:08:46.530871 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb414a4c4_7799_4c49_9aa9_5718c2e5855f.slice/crio-80f534c4bd81393c75dfb37a96ed92ed9eb1b28b8bfbf33de3f706f2e7523c70 WatchSource:0}: Error finding container 80f534c4bd81393c75dfb37a96ed92ed9eb1b28b8bfbf33de3f706f2e7523c70: Status 404 returned error can't find the container with id 80f534c4bd81393c75dfb37a96ed92ed9eb1b28b8bfbf33de3f706f2e7523c70 Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.622852 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" event={"ID":"b414a4c4-7799-4c49-9aa9-5718c2e5855f","Type":"ContainerStarted","Data":"80f534c4bd81393c75dfb37a96ed92ed9eb1b28b8bfbf33de3f706f2e7523c70"} Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.624621 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b","Type":"ContainerStarted","Data":"a3d821a1b6b3cc267c4ee7b7d307c31c520e811e8e2228990c972b36d6dcd1bb"} Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.626938 4740 generic.go:334] "Generic (PLEG): container finished" podID="9909a57b-336c-4687-855f-495a78d21af7" containerID="1d33c0ee4e7ac8c8f775d8a370c2c1dc54afcf20eed20a45abe7ffb6694bb878" exitCode=0 Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.627016 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" event={"ID":"9909a57b-336c-4687-855f-495a78d21af7","Type":"ContainerDied","Data":"1d33c0ee4e7ac8c8f775d8a370c2c1dc54afcf20eed20a45abe7ffb6694bb878"} Feb 16 13:08:46 crc kubenswrapper[4740]: 
I0216 13:08:46.693050 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.699106 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.701442 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-zgr9s" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.701727 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.701999 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.702177 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.719999 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.798644 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbh24\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-kube-api-access-sbh24\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.798700 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.798775 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8953d6de-24a5-4645-b270-2bbafe5b17c5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.799047 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-lock\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.799082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.799198 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-cache\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.900501 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-cache\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.900561 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbh24\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-kube-api-access-sbh24\") pod 
\"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: E0216 13:08:46.900827 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:08:46 crc kubenswrapper[4740]: E0216 13:08:46.900872 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:08:46 crc kubenswrapper[4740]: E0216 13:08:46.900961 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift podName:8953d6de-24a5-4645-b270-2bbafe5b17c5 nodeName:}" failed. No retries permitted until 2026-02-16 13:08:47.400939423 +0000 UTC m=+954.777288144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift") pod "swift-storage-0" (UID: "8953d6de-24a5-4645-b270-2bbafe5b17c5") : configmap "swift-ring-files" not found Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.900976 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.901064 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8953d6de-24a5-4645-b270-2bbafe5b17c5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.901400 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-cache\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.901905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-lock\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.901938 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.902203 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-lock\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.902247 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.907055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8953d6de-24a5-4645-b270-2bbafe5b17c5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.933332 
4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbh24\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-kube-api-access-sbh24\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.934259 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.024653 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: connect: connection refused" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.261083 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.383436 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4rgvg"] Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.384627 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.387576 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.387964 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.388159 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.393902 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4rgvg"] Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.415283 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:47 crc kubenswrapper[4740]: E0216 13:08:47.416693 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:08:47 crc kubenswrapper[4740]: E0216 13:08:47.416751 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:08:47 crc kubenswrapper[4740]: E0216 13:08:47.416852 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift podName:8953d6de-24a5-4645-b270-2bbafe5b17c5 nodeName:}" failed. No retries permitted until 2026-02-16 13:08:48.416786935 +0000 UTC m=+955.793135666 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift") pod "swift-storage-0" (UID: "8953d6de-24a5-4645-b270-2bbafe5b17c5") : configmap "swift-ring-files" not found Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517058 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517126 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517181 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517200 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517227 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517255 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2kn\" (UniqueName: \"kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619199 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619261 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619309 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619325 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619349 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619376 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp2kn\" (UniqueName: \"kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619396 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.620206 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.620829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.621130 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.625201 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.625504 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.628189 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: 
\"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.650629 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp2kn\" (UniqueName: \"kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.677878 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.679940 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.719774 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.720918 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.720964 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.721097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l4zjm\" (UniqueName: \"kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.730944 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.826771 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4zjm\" (UniqueName: \"kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.827153 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.827203 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.827629 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 
16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.827719 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.862801 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4zjm\" (UniqueName: \"kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.105369 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.211275 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4rgvg"] Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.377770 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.378910 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.439617 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:48 crc kubenswrapper[4740]: E0216 13:08:48.439961 4740 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:08:48 crc kubenswrapper[4740]: E0216 13:08:48.439981 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:08:48 crc kubenswrapper[4740]: E0216 13:08:48.440040 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift podName:8953d6de-24a5-4645-b270-2bbafe5b17c5 nodeName:}" failed. No retries permitted until 2026-02-16 13:08:50.440021773 +0000 UTC m=+957.816370494 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift") pod "swift-storage-0" (UID: "8953d6de-24a5-4645-b270-2bbafe5b17c5") : configmap "swift-ring-files" not found Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.468430 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.575801 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.642657 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config\") pod \"9909a57b-336c-4687-855f-495a78d21af7\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.642730 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb\") pod \"9909a57b-336c-4687-855f-495a78d21af7\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.642845 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdxj9\" (UniqueName: \"kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9\") pod \"9909a57b-336c-4687-855f-495a78d21af7\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.642894 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc\") pod \"9909a57b-336c-4687-855f-495a78d21af7\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.659129 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9" (OuterVolumeSpecName: "kube-api-access-sdxj9") pod "9909a57b-336c-4687-855f-495a78d21af7" (UID: "9909a57b-336c-4687-855f-495a78d21af7"). InnerVolumeSpecName "kube-api-access-sdxj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.695211 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0edd2079-790d-4061-aaf4-4213fe6adc7a","Type":"ContainerStarted","Data":"21f7cc3c3fdf74d8fbf24eb65a92a49c96366aae04d5d936ec32816f74b0afe3"} Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.696041 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config" (OuterVolumeSpecName: "config") pod "9909a57b-336c-4687-855f-495a78d21af7" (UID: "9909a57b-336c-4687-855f-495a78d21af7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.697987 4740 generic.go:334] "Generic (PLEG): container finished" podID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerID="6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692" exitCode=0 Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.698075 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" event={"ID":"b414a4c4-7799-4c49-9aa9-5718c2e5855f","Type":"ContainerDied","Data":"6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692"} Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.707239 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.707355 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" event={"ID":"9909a57b-336c-4687-855f-495a78d21af7","Type":"ContainerDied","Data":"ba54eead2dab21215317f93298356fea3b351391f33454128a61a0ee8e86082c"} Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.707416 4740 scope.go:117] "RemoveContainer" containerID="1d33c0ee4e7ac8c8f775d8a370c2c1dc54afcf20eed20a45abe7ffb6694bb878" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.711562 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rgvg" event={"ID":"8a769496-58ca-4540-9dc4-bd8df7e682fc","Type":"ContainerStarted","Data":"d46868a8f2da38aed3e8f09b139b4f8fd740b0b6241a64157a497339a73a45a9"} Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.713026 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.727781 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9909a57b-336c-4687-855f-495a78d21af7" (UID: "9909a57b-336c-4687-855f-495a78d21af7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.728441 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.896893884 podStartE2EDuration="37.728423518s" podCreationTimestamp="2026-02-16 13:08:11 +0000 UTC" firstStartedPulling="2026-02-16 13:08:27.458200478 +0000 UTC m=+934.834549199" lastFinishedPulling="2026-02-16 13:08:37.289730102 +0000 UTC m=+944.666078833" observedRunningTime="2026-02-16 13:08:48.720895011 +0000 UTC m=+956.097243742" watchObservedRunningTime="2026-02-16 13:08:48.728423518 +0000 UTC m=+956.104772229" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.728551 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9909a57b-336c-4687-855f-495a78d21af7" (UID: "9909a57b-336c-4687-855f-495a78d21af7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.750014 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdxj9\" (UniqueName: \"kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.753322 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.754722 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.755326 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.774045 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.048868 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"] Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.056026 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"] Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.059118 4740 scope.go:117] "RemoveContainer" containerID="da3a287d7891a787094b6dfd14fda8d39fc078ebe1345f09ce6fe824a1be10a9" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.300491 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9909a57b-336c-4687-855f-495a78d21af7" 
path="/var/lib/kubelet/pods/9909a57b-336c-4687-855f-495a78d21af7/volumes" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.720251 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" event={"ID":"b414a4c4-7799-4c49-9aa9-5718c2e5855f","Type":"ContainerStarted","Data":"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64"} Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.721938 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.723825 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b","Type":"ContainerStarted","Data":"58790eae7c5c223749a5954ba961d9322038e0165accfb474f5b3b920e9e468e"} Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.723946 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b","Type":"ContainerStarted","Data":"ae0c5c22153eb033a01b4580865b4192b26c0d7dea01338d02f3551ba3861103"} Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.725068 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.730390 4740 generic.go:334] "Generic (PLEG): container finished" podID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerID="31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081" exitCode=0 Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.731440 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerDied","Data":"31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081"} Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.731506 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerStarted","Data":"674fc74e32c41001843c95fbf70084ec6f8d9862901843c0b561d5e3afe04969"} Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.748275 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" podStartSLOduration=4.748258199 podStartE2EDuration="4.748258199s" podCreationTimestamp="2026-02-16 13:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:49.743788298 +0000 UTC m=+957.120137029" watchObservedRunningTime="2026-02-16 13:08:49.748258199 +0000 UTC m=+957.124606920" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.792663 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.7654319689999998 podStartE2EDuration="5.792644825s" podCreationTimestamp="2026-02-16 13:08:44 +0000 UTC" firstStartedPulling="2026-02-16 13:08:46.068274252 +0000 UTC m=+953.444622973" lastFinishedPulling="2026-02-16 13:08:49.095487108 +0000 UTC m=+956.471835829" observedRunningTime="2026-02-16 13:08:49.785951785 +0000 UTC m=+957.162300506" watchObservedRunningTime="2026-02-16 13:08:49.792644825 +0000 UTC m=+957.168993546" Feb 16 13:08:50 crc kubenswrapper[4740]: I0216 13:08:50.494610 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:50 crc kubenswrapper[4740]: E0216 13:08:50.494821 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:08:50 crc kubenswrapper[4740]: E0216 13:08:50.495037 
4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:08:50 crc kubenswrapper[4740]: E0216 13:08:50.495101 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift podName:8953d6de-24a5-4645-b270-2bbafe5b17c5 nodeName:}" failed. No retries permitted until 2026-02-16 13:08:54.495082948 +0000 UTC m=+961.871431659 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift") pod "swift-storage-0" (UID: "8953d6de-24a5-4645-b270-2bbafe5b17c5") : configmap "swift-ring-files" not found Feb 16 13:08:50 crc kubenswrapper[4740]: E0216 13:08:50.652394 4740 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.147:34142->38.102.83.147:36137: read tcp 38.102.83.147:34142->38.102.83.147:36137: read: connection reset by peer Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.416623 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqc4t"] Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.726240 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.726294 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.751798 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wqc4t" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="registry-server" containerID="cri-o://9bfa3be63dee77789c81c4f5ee3f7754049b2872bb9daa83211f215863713928" gracePeriod=2 Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.825289 4740 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.923367 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 13:08:52 crc kubenswrapper[4740]: I0216 13:08:52.759737 4740 generic.go:334] "Generic (PLEG): container finished" podID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerID="9bfa3be63dee77789c81c4f5ee3f7754049b2872bb9daa83211f215863713928" exitCode=0 Feb 16 13:08:52 crc kubenswrapper[4740]: I0216 13:08:52.759846 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerDied","Data":"9bfa3be63dee77789c81c4f5ee3f7754049b2872bb9daa83211f215863713928"} Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.218338 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.218690 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.324489 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.354243 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.424445 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-nxmdt"] Feb 16 13:08:53 crc kubenswrapper[4740]: E0216 13:08:53.425869 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="init" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.425945 4740 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="init" Feb 16 13:08:53 crc kubenswrapper[4740]: E0216 13:08:53.426016 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="dnsmasq-dns" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.426069 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="dnsmasq-dns" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.426330 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="dnsmasq-dns" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.426974 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.434736 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nxmdt"] Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.440911 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8d7k\" (UniqueName: \"kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.441078 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.477638 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-8cb8-account-create-update-dgv8s"] Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.478781 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.480683 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.491147 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8cb8-account-create-update-dgv8s"] Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.542100 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.542258 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8d7k\" (UniqueName: \"kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.542366 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.542477 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5q7r\" (UniqueName: 
\"kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.543684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.570825 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8d7k\" (UniqueName: \"kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.644509 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.644915 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5q7r\" (UniqueName: \"kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.645496 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.673391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5q7r\" (UniqueName: \"kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.758773 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.789502 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.800858 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.835289 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.844295 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.049306 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9mvdt"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.050695 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.056438 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9mvdt"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.116730 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7989-account-create-update-s6gss"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.117731 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.122765 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.131251 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7989-account-create-update-s6gss"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.157996 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2657\" (UniqueName: \"kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.158071 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7vt\" (UniqueName: \"kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.158252 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.158306 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.259976 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.260050 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.260179 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2657\" (UniqueName: \"kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.260222 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ss7vt\" (UniqueName: \"kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.260956 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.261199 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.280548 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2657\" (UniqueName: \"kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.281677 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss7vt\" (UniqueName: \"kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.318003 4740 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/placement-db-create-9v664"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.319067 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.332599 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9v664"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.361136 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9c66\" (UniqueName: \"kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.361185 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.374131 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.433629 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.439829 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4b9b-account-create-update-njhb7"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.441171 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.443554 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.463619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.464248 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s2vf\" (UniqueName: \"kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.464315 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.464377 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9c66\" (UniqueName: \"kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.465438 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.486405 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4b9b-account-create-update-njhb7"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.505903 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9c66\" (UniqueName: \"kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.565642 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s2vf\" (UniqueName: \"kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.565694 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.565758 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod 
\"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:54 crc kubenswrapper[4740]: E0216 13:08:54.566009 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:08:54 crc kubenswrapper[4740]: E0216 13:08:54.566032 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:08:54 crc kubenswrapper[4740]: E0216 13:08:54.566091 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift podName:8953d6de-24a5-4645-b270-2bbafe5b17c5 nodeName:}" failed. No retries permitted until 2026-02-16 13:09:02.566075619 +0000 UTC m=+969.942424330 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift") pod "swift-storage-0" (UID: "8953d6de-24a5-4645-b270-2bbafe5b17c5") : configmap "swift-ring-files" not found Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.566643 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.587527 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s2vf\" (UniqueName: \"kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 
13:08:54.666575 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.771896 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.873779 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.973002 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities\") pod \"dde0147a-01d8-430b-a230-9d8bdfffeadd\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.973052 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfwsz\" (UniqueName: \"kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz\") pod \"dde0147a-01d8-430b-a230-9d8bdfffeadd\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.973080 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content\") pod \"dde0147a-01d8-430b-a230-9d8bdfffeadd\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.977672 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities" (OuterVolumeSpecName: "utilities") pod "dde0147a-01d8-430b-a230-9d8bdfffeadd" (UID: "dde0147a-01d8-430b-a230-9d8bdfffeadd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.988017 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz" (OuterVolumeSpecName: "kube-api-access-mfwsz") pod "dde0147a-01d8-430b-a230-9d8bdfffeadd" (UID: "dde0147a-01d8-430b-a230-9d8bdfffeadd"). InnerVolumeSpecName "kube-api-access-mfwsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.058590 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dde0147a-01d8-430b-a230-9d8bdfffeadd" (UID: "dde0147a-01d8-430b-a230-9d8bdfffeadd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.075980 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.076023 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfwsz\" (UniqueName: \"kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.076042 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.209865 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7989-account-create-update-s6gss"] Feb 16 13:08:55 crc kubenswrapper[4740]: W0216 
13:08:55.334165 4740 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde0147a_01d8_430b_a230_9d8bdfffeadd.slice/pids.max": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde0147a_01d8_430b_a230_9d8bdfffeadd.slice/pids.max: no such device Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.382135 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8cb8-account-create-update-dgv8s"] Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.400900 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9mvdt"] Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.413295 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nxmdt"] Feb 16 13:08:55 crc kubenswrapper[4740]: E0216 13:08:55.557333 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde0147a_01d8_430b_a230_9d8bdfffeadd.slice/crio-6d8ddc766e620293e4ce19966be3eade1f701ea7f4bc85aaa8dbabf74209de85\": RecentStats: unable to find data in memory cache]" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.565206 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9v664"] Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.591973 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4b9b-account-create-update-njhb7"] Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.800728 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7989-account-create-update-s6gss" event={"ID":"14c97501-5a5c-4e03-8e50-cf7422806c32","Type":"ContainerStarted","Data":"4ae899300c9a6a6072cde926ba47e7d60a16bc7eccd0ef24bf505410cc36580f"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.800780 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-7989-account-create-update-s6gss" event={"ID":"14c97501-5a5c-4e03-8e50-cf7422806c32","Type":"ContainerStarted","Data":"e0d34f1ac2e6b6d66de3086dfecd98f8e49cffbc92e012b3ac7b1ea262486ec1"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.804252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxmdt" event={"ID":"4996abf8-6c4b-42d0-99f2-aeacf2fd5591","Type":"ContainerStarted","Data":"a9f2fb916ca14b8c5ef554516013184723e7f73663c25f8d4596934aa4c48ae1"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.804293 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxmdt" event={"ID":"4996abf8-6c4b-42d0-99f2-aeacf2fd5591","Type":"ContainerStarted","Data":"94a215b864e6eed7e3f2d37e6b50edeb4ad167f73f22c4d0e5cda0e3504635d6"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.810721 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rgvg" event={"ID":"8a769496-58ca-4540-9dc4-bd8df7e682fc","Type":"ContainerStarted","Data":"b4fa66e4440a06d8e1925285b9bd7f879f94b33aa14f971048dd803186ba2997"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.822860 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7989-account-create-update-s6gss" podStartSLOduration=1.822844905 podStartE2EDuration="1.822844905s" podCreationTimestamp="2026-02-16 13:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:55.815306018 +0000 UTC m=+963.191654739" watchObservedRunningTime="2026-02-16 13:08:55.822844905 +0000 UTC m=+963.199193626" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.825783 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" 
event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerDied","Data":"6d8ddc766e620293e4ce19966be3eade1f701ea7f4bc85aaa8dbabf74209de85"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.825877 4740 scope.go:117] "RemoveContainer" containerID="9bfa3be63dee77789c81c4f5ee3f7754049b2872bb9daa83211f215863713928" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.825999 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.828109 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b9b-account-create-update-njhb7" event={"ID":"bb88b05d-b7b7-4a08-847c-5e8d5cc98477","Type":"ContainerStarted","Data":"ad0e406c973e4bb00f25e74fd93906ecc970c954b609dbdb218023b0dafa24d9"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.834005 4740 generic.go:334] "Generic (PLEG): container finished" podID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerID="5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c" exitCode=0 Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.834060 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-nxmdt" podStartSLOduration=2.834049517 podStartE2EDuration="2.834049517s" podCreationTimestamp="2026-02-16 13:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:55.831474096 +0000 UTC m=+963.207822837" watchObservedRunningTime="2026-02-16 13:08:55.834049517 +0000 UTC m=+963.210398238" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.834315 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerDied","Data":"5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c"} Feb 
16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.846708 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9v664" event={"ID":"59544dcd-0bd1-4b5f-abf6-9ab972168af0","Type":"ContainerStarted","Data":"56cc37204670ebf20ed0172df79fdfa9d93ecfda998e37b4c23425462cf86f06"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.849040 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8cb8-account-create-update-dgv8s" event={"ID":"b12e494a-5467-4264-a0e5-2596c61b4a73","Type":"ContainerStarted","Data":"90265e42eb4894d283c546642bcf3972b8e18c6c5cd6a445431640804cc73965"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.849071 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8cb8-account-create-update-dgv8s" event={"ID":"b12e494a-5467-4264-a0e5-2596c61b4a73","Type":"ContainerStarted","Data":"f8e58d548b1b3ef23567819be0b9506bd3ead4ae7b114fd20dfbcf6bcf468a99"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.852649 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4rgvg" podStartSLOduration=2.467683 podStartE2EDuration="8.852634943s" podCreationTimestamp="2026-02-16 13:08:47 +0000 UTC" firstStartedPulling="2026-02-16 13:08:48.275029741 +0000 UTC m=+955.651378462" lastFinishedPulling="2026-02-16 13:08:54.659981684 +0000 UTC m=+962.036330405" observedRunningTime="2026-02-16 13:08:55.846942314 +0000 UTC m=+963.223291035" watchObservedRunningTime="2026-02-16 13:08:55.852634943 +0000 UTC m=+963.228983654" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.854095 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9mvdt" event={"ID":"5b945754-b567-43e9-a84a-4e0ea95900e7","Type":"ContainerStarted","Data":"494606bd64bc796891b73eef0c184bf2997ed3f769f9febfe8dd04a4677e5715"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.854171 4740 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-db-create-9mvdt" event={"ID":"5b945754-b567-43e9-a84a-4e0ea95900e7","Type":"ContainerStarted","Data":"47b3c8f37dcb32f3d527254dece6c0a4adcaf1fd786bf2bb59ffac5e66cd17db"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.855488 4740 scope.go:117] "RemoveContainer" containerID="d47df2f9f4e292409c655fe7591c226fe30607c4e0638aae9e313eb8f50fbed1" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.881707 4740 scope.go:117] "RemoveContainer" containerID="6c1987d37eda48bc957fc38cb1ada6838a4e15b9f5f27415b2d688878ac2b28b" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.891056 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8cb8-account-create-update-dgv8s" podStartSLOduration=2.8910382610000003 podStartE2EDuration="2.891038261s" podCreationTimestamp="2026-02-16 13:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:55.89068213 +0000 UTC m=+963.267030851" watchObservedRunningTime="2026-02-16 13:08:55.891038261 +0000 UTC m=+963.267386982" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.912200 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-9mvdt" podStartSLOduration=1.912179596 podStartE2EDuration="1.912179596s" podCreationTimestamp="2026-02-16 13:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:55.907454327 +0000 UTC m=+963.283803048" watchObservedRunningTime="2026-02-16 13:08:55.912179596 +0000 UTC m=+963.288528317" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.932280 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqc4t"] Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.935742 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-wqc4t"] Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.021145 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48mhx"] Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.021641 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-48mhx" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="registry-server" containerID="cri-o://7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37" gracePeriod=2 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.088096 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.147032 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"] Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.147589 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-76rjc" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="dnsmasq-dns" containerID="cri-o://8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a" gracePeriod=10 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.221078 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"] Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.221303 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-skhl7" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="registry-server" containerID="cri-o://23c26fd38fe2c111190f23402f390777887a9905dc758d9f6ac51ca114039940" gracePeriod=2 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.618954 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.732901 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxgc8\" (UniqueName: \"kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8\") pod \"1e068ce5-e7a1-430c-97f7-fed550912288\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.732947 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities\") pod \"1e068ce5-e7a1-430c-97f7-fed550912288\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.733024 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content\") pod \"1e068ce5-e7a1-430c-97f7-fed550912288\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.734469 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities" (OuterVolumeSpecName: "utilities") pod "1e068ce5-e7a1-430c-97f7-fed550912288" (UID: "1e068ce5-e7a1-430c-97f7-fed550912288"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.742012 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8" (OuterVolumeSpecName: "kube-api-access-gxgc8") pod "1e068ce5-e7a1-430c-97f7-fed550912288" (UID: "1e068ce5-e7a1-430c-97f7-fed550912288"). InnerVolumeSpecName "kube-api-access-gxgc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.796188 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e068ce5-e7a1-430c-97f7-fed550912288" (UID: "1e068ce5-e7a1-430c-97f7-fed550912288"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.835208 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.835240 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxgc8\" (UniqueName: \"kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.835254 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.866136 4740 generic.go:334] "Generic (PLEG): container finished" podID="5b945754-b567-43e9-a84a-4e0ea95900e7" containerID="494606bd64bc796891b73eef0c184bf2997ed3f769f9febfe8dd04a4677e5715" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.866429 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9mvdt" event={"ID":"5b945754-b567-43e9-a84a-4e0ea95900e7","Type":"ContainerDied","Data":"494606bd64bc796891b73eef0c184bf2997ed3f769f9febfe8dd04a4677e5715"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.870088 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="14c97501-5a5c-4e03-8e50-cf7422806c32" containerID="4ae899300c9a6a6072cde926ba47e7d60a16bc7eccd0ef24bf505410cc36580f" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.870147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7989-account-create-update-s6gss" event={"ID":"14c97501-5a5c-4e03-8e50-cf7422806c32","Type":"ContainerDied","Data":"4ae899300c9a6a6072cde926ba47e7d60a16bc7eccd0ef24bf505410cc36580f"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.872282 4740 generic.go:334] "Generic (PLEG): container finished" podID="aca31aa1-429e-4f65-acd5-8896734d0713" containerID="23c26fd38fe2c111190f23402f390777887a9905dc758d9f6ac51ca114039940" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.872344 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerDied","Data":"23c26fd38fe2c111190f23402f390777887a9905dc758d9f6ac51ca114039940"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.874430 4740 generic.go:334] "Generic (PLEG): container finished" podID="bb88b05d-b7b7-4a08-847c-5e8d5cc98477" containerID="373fd871381d49fd63e5ca3ab666f3487ac9b7f0d28abe89d7c9eb2229c50cd0" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.874497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b9b-account-create-update-njhb7" event={"ID":"bb88b05d-b7b7-4a08-847c-5e8d5cc98477","Type":"ContainerDied","Data":"373fd871381d49fd63e5ca3ab666f3487ac9b7f0d28abe89d7c9eb2229c50cd0"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.877238 4740 generic.go:334] "Generic (PLEG): container finished" podID="b12e494a-5467-4264-a0e5-2596c61b4a73" containerID="90265e42eb4894d283c546642bcf3972b8e18c6c5cd6a445431640804cc73965" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.877299 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-8cb8-account-create-update-dgv8s" event={"ID":"b12e494a-5467-4264-a0e5-2596c61b4a73","Type":"ContainerDied","Data":"90265e42eb4894d283c546642bcf3972b8e18c6c5cd6a445431640804cc73965"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.881601 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.896863 4740 generic.go:334] "Generic (PLEG): container finished" podID="1e068ce5-e7a1-430c-97f7-fed550912288" containerID="7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.896914 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.897010 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerDied","Data":"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.898002 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerDied","Data":"e25ea221b5fa4528f6319f69abf2088a3814b82a1e688ade98fa8da437436a8d"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.898046 4740 scope.go:117] "RemoveContainer" containerID="7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.900554 4740 generic.go:334] "Generic (PLEG): container finished" podID="4996abf8-6c4b-42d0-99f2-aeacf2fd5591" containerID="a9f2fb916ca14b8c5ef554516013184723e7f73663c25f8d4596934aa4c48ae1" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.900620 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxmdt" event={"ID":"4996abf8-6c4b-42d0-99f2-aeacf2fd5591","Type":"ContainerDied","Data":"a9f2fb916ca14b8c5ef554516013184723e7f73663c25f8d4596934aa4c48ae1"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.938398 4740 generic.go:334] "Generic (PLEG): container finished" podID="92418e50-20f2-495c-9b06-963a5cd506d1" containerID="8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.938498 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-76rjc" event={"ID":"92418e50-20f2-495c-9b06-963a5cd506d1","Type":"ContainerDied","Data":"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.938535 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-76rjc" event={"ID":"92418e50-20f2-495c-9b06-963a5cd506d1","Type":"ContainerDied","Data":"9fcab72dfdae204ac7d5555a0d11d6020c89a6deabe72c826aa4804ab6026578"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.938665 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.948573 4740 generic.go:334] "Generic (PLEG): container finished" podID="59544dcd-0bd1-4b5f-abf6-9ab972168af0" containerID="baf6eedd884c010f372a94d42ad034029305e68987497ddad6d85e711b8ce518" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.954057 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9v664" event={"ID":"59544dcd-0bd1-4b5f-abf6-9ab972168af0","Type":"ContainerDied","Data":"baf6eedd884c010f372a94d42ad034029305e68987497ddad6d85e711b8ce518"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.965626 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.971077 4740 scope.go:117] "RemoveContainer" containerID="ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.025988 4740 scope.go:117] "RemoveContainer" containerID="72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.053774 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities\") pod \"aca31aa1-429e-4f65-acd5-8896734d0713\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054041 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb\") pod \"92418e50-20f2-495c-9b06-963a5cd506d1\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054162 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config\") pod \"92418e50-20f2-495c-9b06-963a5cd506d1\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054267 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww4jl\" (UniqueName: \"kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl\") pod \"aca31aa1-429e-4f65-acd5-8896734d0713\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054385 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gndhd\" (UniqueName: \"kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd\") pod \"92418e50-20f2-495c-9b06-963a5cd506d1\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054480 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content\") pod \"aca31aa1-429e-4f65-acd5-8896734d0713\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054616 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb\") pod \"92418e50-20f2-495c-9b06-963a5cd506d1\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054757 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc\") pod \"92418e50-20f2-495c-9b06-963a5cd506d1\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.070234 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd" (OuterVolumeSpecName: "kube-api-access-gndhd") pod "92418e50-20f2-495c-9b06-963a5cd506d1" (UID: "92418e50-20f2-495c-9b06-963a5cd506d1"). InnerVolumeSpecName "kube-api-access-gndhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.071379 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities" (OuterVolumeSpecName: "utilities") pod "aca31aa1-429e-4f65-acd5-8896734d0713" (UID: "aca31aa1-429e-4f65-acd5-8896734d0713"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.076034 4740 scope.go:117] "RemoveContainer" containerID="7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37" Feb 16 13:08:57 crc kubenswrapper[4740]: E0216 13:08:57.081204 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37\": container with ID starting with 7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37 not found: ID does not exist" containerID="7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.081486 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37"} err="failed to get container status \"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37\": rpc error: code = NotFound desc = could not find container \"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37\": container with ID starting with 7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37 not found: ID does not exist" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.081631 4740 scope.go:117] "RemoveContainer" containerID="ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7" Feb 16 13:08:57 crc kubenswrapper[4740]: E0216 13:08:57.082563 4740 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7\": container with ID starting with ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7 not found: ID does not exist" containerID="ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.082635 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7"} err="failed to get container status \"ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7\": rpc error: code = NotFound desc = could not find container \"ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7\": container with ID starting with ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7 not found: ID does not exist" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.082661 4740 scope.go:117] "RemoveContainer" containerID="72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9" Feb 16 13:08:57 crc kubenswrapper[4740]: E0216 13:08:57.083331 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9\": container with ID starting with 72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9 not found: ID does not exist" containerID="72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.083373 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9"} err="failed to get container status \"72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9\": rpc error: code = NotFound desc = could not find container 
\"72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9\": container with ID starting with 72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9 not found: ID does not exist" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.083390 4740 scope.go:117] "RemoveContainer" containerID="8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.085098 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl" (OuterVolumeSpecName: "kube-api-access-ww4jl") pod "aca31aa1-429e-4f65-acd5-8896734d0713" (UID: "aca31aa1-429e-4f65-acd5-8896734d0713"). InnerVolumeSpecName "kube-api-access-ww4jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.105970 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "92418e50-20f2-495c-9b06-963a5cd506d1" (UID: "92418e50-20f2-495c-9b06-963a5cd506d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.113152 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92418e50-20f2-495c-9b06-963a5cd506d1" (UID: "92418e50-20f2-495c-9b06-963a5cd506d1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.125303 4740 scope.go:117] "RemoveContainer" containerID="6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.130910 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config" (OuterVolumeSpecName: "config") pod "92418e50-20f2-495c-9b06-963a5cd506d1" (UID: "92418e50-20f2-495c-9b06-963a5cd506d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.138994 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48mhx"] Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.142545 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92418e50-20f2-495c-9b06-963a5cd506d1" (UID: "92418e50-20f2-495c-9b06-963a5cd506d1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.150866 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-48mhx"] Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.156136 4740 scope.go:117] "RemoveContainer" containerID="8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a" Feb 16 13:08:57 crc kubenswrapper[4740]: E0216 13:08:57.156606 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a\": container with ID starting with 8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a not found: ID does not exist" containerID="8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.156658 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a"} err="failed to get container status \"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a\": rpc error: code = NotFound desc = could not find container \"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a\": container with ID starting with 8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a not found: ID does not exist" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.156691 4740 scope.go:117] "RemoveContainer" containerID="6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157047 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157077 4740 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157086 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: E0216 13:08:57.157020 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302\": container with ID starting with 6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302 not found: ID does not exist" containerID="6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157126 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302"} err="failed to get container status \"6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302\": rpc error: code = NotFound desc = could not find container \"6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302\": container with ID starting with 6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302 not found: ID does not exist" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157098 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157515 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config\") on node 
\"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157546 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww4jl\" (UniqueName: \"kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157573 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gndhd\" (UniqueName: \"kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.230667 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aca31aa1-429e-4f65-acd5-8896734d0713" (UID: "aca31aa1-429e-4f65-acd5-8896734d0713"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.258570 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.277483 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"] Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.290610 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" path="/var/lib/kubelet/pods/1e068ce5-e7a1-430c-97f7-fed550912288/volumes" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.291260 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" path="/var/lib/kubelet/pods/dde0147a-01d8-430b-a230-9d8bdfffeadd/volumes" Feb 16 13:08:57 crc 
kubenswrapper[4740]: I0216 13:08:57.291970 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"] Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.959729 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.959786 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerDied","Data":"1fd3890eb822343ee419a082a86d7c0f7f37da9e46f4c355fdf04fe11a7d6219"} Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.961111 4740 scope.go:117] "RemoveContainer" containerID="23c26fd38fe2c111190f23402f390777887a9905dc758d9f6ac51ca114039940" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.965600 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerStarted","Data":"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a"} Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.998912 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"] Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.003882 4740 scope.go:117] "RemoveContainer" containerID="c019ef5cfe36bc351706aadd4baf7b12f1204352eb6b5f824b90ceeaf17080aa" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.004444 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"] Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.025989 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mnb45" podStartSLOduration=3.861798757 podStartE2EDuration="11.025967969s" podCreationTimestamp="2026-02-16 13:08:47 +0000 UTC" 
firstStartedPulling="2026-02-16 13:08:49.732686829 +0000 UTC m=+957.109035550" lastFinishedPulling="2026-02-16 13:08:56.896856041 +0000 UTC m=+964.273204762" observedRunningTime="2026-02-16 13:08:58.019734163 +0000 UTC m=+965.396082904" watchObservedRunningTime="2026-02-16 13:08:58.025967969 +0000 UTC m=+965.402316690" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.042719 4740 scope.go:117] "RemoveContainer" containerID="34e7fc9ac737075feeb07ec3e7ed9c671f07626ad33bba4d05665def21010930" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.105791 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.106164 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.349803 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.378404 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2657\" (UniqueName: \"kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657\") pod \"14c97501-5a5c-4e03-8e50-cf7422806c32\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.378562 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts\") pod \"14c97501-5a5c-4e03-8e50-cf7422806c32\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.379374 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14c97501-5a5c-4e03-8e50-cf7422806c32" (UID: "14c97501-5a5c-4e03-8e50-cf7422806c32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.389035 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657" (OuterVolumeSpecName: "kube-api-access-j2657") pod "14c97501-5a5c-4e03-8e50-cf7422806c32" (UID: "14c97501-5a5c-4e03-8e50-cf7422806c32"). InnerVolumeSpecName "kube-api-access-j2657". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.481689 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2657\" (UniqueName: \"kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.482090 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.742251 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.747627 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.772452 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790519 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts\") pod \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790604 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s2vf\" (UniqueName: \"kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf\") pod \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790667 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts\") pod \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790721 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8d7k\" (UniqueName: \"kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k\") pod \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790794 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts\") pod \"5b945754-b567-43e9-a84a-4e0ea95900e7\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790910 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ss7vt\" (UniqueName: \"kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt\") pod \"5b945754-b567-43e9-a84a-4e0ea95900e7\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.791006 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb88b05d-b7b7-4a08-847c-5e8d5cc98477" (UID: "bb88b05d-b7b7-4a08-847c-5e8d5cc98477"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.791325 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.791444 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4996abf8-6c4b-42d0-99f2-aeacf2fd5591" (UID: "4996abf8-6c4b-42d0-99f2-aeacf2fd5591"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.792285 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b945754-b567-43e9-a84a-4e0ea95900e7" (UID: "5b945754-b567-43e9-a84a-4e0ea95900e7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.796172 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf" (OuterVolumeSpecName: "kube-api-access-9s2vf") pod "bb88b05d-b7b7-4a08-847c-5e8d5cc98477" (UID: "bb88b05d-b7b7-4a08-847c-5e8d5cc98477"). InnerVolumeSpecName "kube-api-access-9s2vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.796247 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.801539 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k" (OuterVolumeSpecName: "kube-api-access-x8d7k") pod "4996abf8-6c4b-42d0-99f2-aeacf2fd5591" (UID: "4996abf8-6c4b-42d0-99f2-aeacf2fd5591"). InnerVolumeSpecName "kube-api-access-x8d7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.816135 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt" (OuterVolumeSpecName: "kube-api-access-ss7vt") pod "5b945754-b567-43e9-a84a-4e0ea95900e7" (UID: "5b945754-b567-43e9-a84a-4e0ea95900e7"). InnerVolumeSpecName "kube-api-access-ss7vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.888633 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9v664" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892237 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts\") pod \"b12e494a-5467-4264-a0e5-2596c61b4a73\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892362 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5q7r\" (UniqueName: \"kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r\") pod \"b12e494a-5467-4264-a0e5-2596c61b4a73\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892664 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b12e494a-5467-4264-a0e5-2596c61b4a73" (UID: "b12e494a-5467-4264-a0e5-2596c61b4a73"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892676 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss7vt\" (UniqueName: \"kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892788 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s2vf\" (UniqueName: \"kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892803 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892827 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8d7k\" (UniqueName: \"kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892836 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.896925 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r" (OuterVolumeSpecName: "kube-api-access-b5q7r") pod "b12e494a-5467-4264-a0e5-2596c61b4a73" (UID: "b12e494a-5467-4264-a0e5-2596c61b4a73"). InnerVolumeSpecName "kube-api-access-b5q7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.973732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9v664" event={"ID":"59544dcd-0bd1-4b5f-abf6-9ab972168af0","Type":"ContainerDied","Data":"56cc37204670ebf20ed0172df79fdfa9d93ecfda998e37b4c23425462cf86f06"} Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.973756 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9v664" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.973773 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56cc37204670ebf20ed0172df79fdfa9d93ecfda998e37b4c23425462cf86f06" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.975180 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8cb8-account-create-update-dgv8s" event={"ID":"b12e494a-5467-4264-a0e5-2596c61b4a73","Type":"ContainerDied","Data":"f8e58d548b1b3ef23567819be0b9506bd3ead4ae7b114fd20dfbcf6bcf468a99"} Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.975207 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e58d548b1b3ef23567819be0b9506bd3ead4ae7b114fd20dfbcf6bcf468a99" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.975244 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.976908 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.976911 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9mvdt" event={"ID":"5b945754-b567-43e9-a84a-4e0ea95900e7","Type":"ContainerDied","Data":"47b3c8f37dcb32f3d527254dece6c0a4adcaf1fd786bf2bb59ffac5e66cd17db"} Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.977076 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47b3c8f37dcb32f3d527254dece6c0a4adcaf1fd786bf2bb59ffac5e66cd17db" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.983062 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7989-account-create-update-s6gss" event={"ID":"14c97501-5a5c-4e03-8e50-cf7422806c32","Type":"ContainerDied","Data":"e0d34f1ac2e6b6d66de3086dfecd98f8e49cffbc92e012b3ac7b1ea262486ec1"} Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.983098 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0d34f1ac2e6b6d66de3086dfecd98f8e49cffbc92e012b3ac7b1ea262486ec1" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.983160 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.993550 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts\") pod \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.993588 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9c66\" (UniqueName: \"kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66\") pod \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.994374 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxmdt" event={"ID":"4996abf8-6c4b-42d0-99f2-aeacf2fd5591","Type":"ContainerDied","Data":"94a215b864e6eed7e3f2d37e6b50edeb4ad167f73f22c4d0e5cda0e3504635d6"} Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.994414 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94a215b864e6eed7e3f2d37e6b50edeb4ad167f73f22c4d0e5cda0e3504635d6" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.994450 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.994766 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59544dcd-0bd1-4b5f-abf6-9ab972168af0" (UID: "59544dcd-0bd1-4b5f-abf6-9ab972168af0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.996837 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.996859 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.996869 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5q7r\" (UniqueName: \"kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.997178 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66" (OuterVolumeSpecName: "kube-api-access-t9c66") pod "59544dcd-0bd1-4b5f-abf6-9ab972168af0" (UID: "59544dcd-0bd1-4b5f-abf6-9ab972168af0"). InnerVolumeSpecName "kube-api-access-t9c66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.998826 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b9b-account-create-update-njhb7" event={"ID":"bb88b05d-b7b7-4a08-847c-5e8d5cc98477","Type":"ContainerDied","Data":"ad0e406c973e4bb00f25e74fd93906ecc970c954b609dbdb218023b0dafa24d9"} Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.998868 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.998874 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad0e406c973e4bb00f25e74fd93906ecc970c954b609dbdb218023b0dafa24d9" Feb 16 13:08:59 crc kubenswrapper[4740]: I0216 13:08:59.098520 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9c66\" (UniqueName: \"kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:59 crc kubenswrapper[4740]: I0216 13:08:59.181627 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mnb45" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="registry-server" probeResult="failure" output=< Feb 16 13:08:59 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 13:08:59 crc kubenswrapper[4740]: > Feb 16 13:08:59 crc kubenswrapper[4740]: I0216 13:08:59.292234 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" path="/var/lib/kubelet/pods/92418e50-20f2-495c-9b06-963a5cd506d1/volumes" Feb 16 13:08:59 crc kubenswrapper[4740]: I0216 13:08:59.292768 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" path="/var/lib/kubelet/pods/aca31aa1-429e-4f65-acd5-8896734d0713/volumes" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.335362 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4x6h6"] Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336295 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="registry-server" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336312 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="registry-server" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336332 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="extract-utilities" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336339 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="extract-utilities" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336350 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="dnsmasq-dns" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336359 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="dnsmasq-dns" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336371 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="extract-utilities" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336377 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="extract-utilities" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336388 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4996abf8-6c4b-42d0-99f2-aeacf2fd5591" containerName="mariadb-database-create" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336395 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4996abf8-6c4b-42d0-99f2-aeacf2fd5591" containerName="mariadb-database-create" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336402 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="init" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336408 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="init" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336418 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b945754-b567-43e9-a84a-4e0ea95900e7" containerName="mariadb-database-create" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336424 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b945754-b567-43e9-a84a-4e0ea95900e7" containerName="mariadb-database-create" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336434 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="extract-utilities" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336443 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="extract-utilities" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336454 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59544dcd-0bd1-4b5f-abf6-9ab972168af0" containerName="mariadb-database-create" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336460 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="59544dcd-0bd1-4b5f-abf6-9ab972168af0" containerName="mariadb-database-create" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336471 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="extract-content" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336479 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="extract-content" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336489 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb88b05d-b7b7-4a08-847c-5e8d5cc98477" containerName="mariadb-account-create-update" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336509 4740 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="bb88b05d-b7b7-4a08-847c-5e8d5cc98477" containerName="mariadb-account-create-update" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336523 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12e494a-5467-4264-a0e5-2596c61b4a73" containerName="mariadb-account-create-update" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336529 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12e494a-5467-4264-a0e5-2596c61b4a73" containerName="mariadb-account-create-update" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336546 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c97501-5a5c-4e03-8e50-cf7422806c32" containerName="mariadb-account-create-update" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336553 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c97501-5a5c-4e03-8e50-cf7422806c32" containerName="mariadb-account-create-update" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336565 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="registry-server" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336570 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="registry-server" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336581 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="extract-content" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336587 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="extract-content" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336597 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="extract-content" Feb 16 13:09:00 crc 
kubenswrapper[4740]: I0216 13:09:00.336603 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="extract-content" Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336614 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="registry-server" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336620 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="registry-server" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336789 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4996abf8-6c4b-42d0-99f2-aeacf2fd5591" containerName="mariadb-database-create" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336801 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="registry-server" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336823 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="registry-server" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336835 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb88b05d-b7b7-4a08-847c-5e8d5cc98477" containerName="mariadb-account-create-update" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336843 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="registry-server" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336852 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b945754-b567-43e9-a84a-4e0ea95900e7" containerName="mariadb-database-create" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336861 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b12e494a-5467-4264-a0e5-2596c61b4a73" 
containerName="mariadb-account-create-update" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336872 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="59544dcd-0bd1-4b5f-abf6-9ab972168af0" containerName="mariadb-database-create" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336879 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c97501-5a5c-4e03-8e50-cf7422806c32" containerName="mariadb-account-create-update" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336888 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="dnsmasq-dns" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.337560 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.344579 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.362108 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4x6h6"] Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.420544 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjz6\" (UniqueName: \"kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6\") pod \"root-account-create-update-4x6h6\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.420592 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts\") pod \"root-account-create-update-4x6h6\" (UID: 
\"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.521905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qjz6\" (UniqueName: \"kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6\") pod \"root-account-create-update-4x6h6\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.521953 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts\") pod \"root-account-create-update-4x6h6\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.522614 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts\") pod \"root-account-create-update-4x6h6\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.541004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qjz6\" (UniqueName: \"kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6\") pod \"root-account-create-update-4x6h6\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.657710 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:01 crc kubenswrapper[4740]: I0216 13:09:01.088394 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4x6h6"] Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.023198 4740 generic.go:334] "Generic (PLEG): container finished" podID="36527dd8-2945-4976-894f-67360343ae7d" containerID="538fc5b7f98d6ae84456c8d0c054c6e4ef97df100afc94d9176239a93296b9b5" exitCode=0 Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.023284 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4x6h6" event={"ID":"36527dd8-2945-4976-894f-67360343ae7d","Type":"ContainerDied","Data":"538fc5b7f98d6ae84456c8d0c054c6e4ef97df100afc94d9176239a93296b9b5"} Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.023444 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4x6h6" event={"ID":"36527dd8-2945-4976-894f-67360343ae7d","Type":"ContainerStarted","Data":"cc6e65dfd72d8fcacb6b28aa4b7c6b2a897d1c02cf62ea6e15e12e739f6bfbc7"} Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.570726 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.579517 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.626618 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.032444 4740 generic.go:334] "Generic (PLEG): container finished" podID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerID="ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109" exitCode=0 Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.032505 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerDied","Data":"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109"} Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.036861 4740 generic.go:334] "Generic (PLEG): container finished" podID="8a769496-58ca-4540-9dc4-bd8df7e682fc" containerID="b4fa66e4440a06d8e1925285b9bd7f879f94b33aa14f971048dd803186ba2997" exitCode=0 Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.036933 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rgvg" event={"ID":"8a769496-58ca-4540-9dc4-bd8df7e682fc","Type":"ContainerDied","Data":"b4fa66e4440a06d8e1925285b9bd7f879f94b33aa14f971048dd803186ba2997"} Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.039083 4740 generic.go:334] "Generic (PLEG): container finished" podID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerID="63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133" exitCode=0 Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.039235 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerDied","Data":"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133"} Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.211509 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.396554 4740 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.486869 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qjz6\" (UniqueName: \"kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6\") pod \"36527dd8-2945-4976-894f-67360343ae7d\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.487437 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts\") pod \"36527dd8-2945-4976-894f-67360343ae7d\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.488174 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36527dd8-2945-4976-894f-67360343ae7d" (UID: "36527dd8-2945-4976-894f-67360343ae7d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.492379 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6" (OuterVolumeSpecName: "kube-api-access-4qjz6") pod "36527dd8-2945-4976-894f-67360343ae7d" (UID: "36527dd8-2945-4976-894f-67360343ae7d"). InnerVolumeSpecName "kube-api-access-4qjz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.588780 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.588829 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qjz6\" (UniqueName: \"kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.669518 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7lg27"] Feb 16 13:09:03 crc kubenswrapper[4740]: E0216 13:09:03.671328 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36527dd8-2945-4976-894f-67360343ae7d" containerName="mariadb-account-create-update" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.671354 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="36527dd8-2945-4976-894f-67360343ae7d" containerName="mariadb-account-create-update" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.671581 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="36527dd8-2945-4976-894f-67360343ae7d" containerName="mariadb-account-create-update" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.672192 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.674361 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.675039 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nblft" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.692315 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.692412 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkb5m\" (UniqueName: \"kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.692457 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.692503 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 
crc kubenswrapper[4740]: I0216 13:09:03.704960 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7lg27"] Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.794785 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.794892 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.794952 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkb5m\" (UniqueName: \"kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.794995 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.798800 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " 
pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.798930 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.799746 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.809111 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkb5m\" (UniqueName: \"kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.994579 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.056302 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerStarted","Data":"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1"} Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.056700 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.059463 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerStarted","Data":"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088"} Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.059852 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.062209 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"8ddaaa78282064690f9c617f5bb12f22b7b429edae49a32326aa15776d83be01"} Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.065453 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4x6h6" event={"ID":"36527dd8-2945-4976-894f-67360343ae7d","Type":"ContainerDied","Data":"cc6e65dfd72d8fcacb6b28aa4b7c6b2a897d1c02cf62ea6e15e12e739f6bfbc7"} Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.065504 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc6e65dfd72d8fcacb6b28aa4b7c6b2a897d1c02cf62ea6e15e12e739f6bfbc7" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.065646 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.104988 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.9875025 podStartE2EDuration="56.104972465s" podCreationTimestamp="2026-02-16 13:08:08 +0000 UTC" firstStartedPulling="2026-02-16 13:08:10.693876652 +0000 UTC m=+918.070225373" lastFinishedPulling="2026-02-16 13:08:28.811346617 +0000 UTC m=+936.187695338" observedRunningTime="2026-02-16 13:09:04.104603273 +0000 UTC m=+971.480952014" watchObservedRunningTime="2026-02-16 13:09:04.104972465 +0000 UTC m=+971.481321196" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.113377 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.74523276 podStartE2EDuration="55.113321398s" podCreationTimestamp="2026-02-16 13:08:09 +0000 UTC" firstStartedPulling="2026-02-16 13:08:18.420702959 +0000 UTC m=+925.797051680" lastFinishedPulling="2026-02-16 13:08:28.788791597 +0000 UTC m=+936.165140318" observedRunningTime="2026-02-16 13:09:04.081115714 +0000 UTC m=+971.457464445" watchObservedRunningTime="2026-02-16 13:09:04.113321398 +0000 UTC m=+971.489670119" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.426297 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.608900 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609111 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609128 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609187 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609216 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp2kn\" (UniqueName: \"kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609236 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609266 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.610946 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.619324 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.653129 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn" (OuterVolumeSpecName: "kube-api-access-mp2kn") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "kube-api-access-mp2kn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.662042 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.683449 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts" (OuterVolumeSpecName: "scripts") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.706943 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.712032 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713451 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713468 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp2kn\" (UniqueName: \"kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713478 4740 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713486 4740 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713496 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713504 4740 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713511 4740 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.084287 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"ff664a449c345e69480ecdd2fda5b555b2b764f1bf8a046ae3b40f3be204b904"} Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.084643 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"101dc671806cd9be004e923bcece9a145bc3add4f1371138a82d2c99d82ff056"} Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.084660 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"b164356a1354c53ba3a298e2db8bec44018c009ecca15e734fa39e5378fa2c61"} Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.087899 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.087891 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rgvg" event={"ID":"8a769496-58ca-4540-9dc4-bd8df7e682fc","Type":"ContainerDied","Data":"d46868a8f2da38aed3e8f09b139b4f8fd740b0b6241a64157a497339a73a45a9"} Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.087945 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d46868a8f2da38aed3e8f09b139b4f8fd740b0b6241a64157a497339a73a45a9" Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.092612 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7lg27"] Feb 16 13:09:05 crc kubenswrapper[4740]: W0216 13:09:05.102564 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf092c8c4_9a32_4093_9a5c_bc5fd05d600e.slice/crio-b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199 WatchSource:0}: Error finding container b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199: Status 404 returned error can't find the container with id b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199 Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.376284 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 13:09:06 crc kubenswrapper[4740]: I0216 13:09:06.105286 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"1870062db4f02aa7ef673e04c92569d0bc2d9bfee4f609444d1c156208e1c246"} Feb 16 13:09:06 crc kubenswrapper[4740]: I0216 13:09:06.109831 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg27" event={"ID":"f092c8c4-9a32-4093-9a5c-bc5fd05d600e","Type":"ContainerStarted","Data":"b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199"} Feb 16 13:09:06 crc kubenswrapper[4740]: I0216 13:09:06.528866 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4x6h6"] Feb 16 13:09:06 crc kubenswrapper[4740]: I0216 13:09:06.540014 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4x6h6"] Feb 16 13:09:07 crc kubenswrapper[4740]: I0216 13:09:07.124122 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"59d94ed00a856d6a01660445439abcfac2f74397a922657ef2bcf79a5827089f"} Feb 16 13:09:07 crc kubenswrapper[4740]: I0216 13:09:07.124178 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"d25968fa30d1a53225f2b0022ad6c65d520c6f475bc8a6a088d2c9e430ed25a4"} Feb 16 13:09:07 crc kubenswrapper[4740]: I0216 13:09:07.124192 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"bbac753e3b96f5ff361d427b75a7fb31e2141fe41ff62be2273363686456aa5c"} Feb 16 13:09:07 crc kubenswrapper[4740]: I0216 13:09:07.290635 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36527dd8-2945-4976-894f-67360343ae7d" path="/var/lib/kubelet/pods/36527dd8-2945-4976-894f-67360343ae7d/volumes" Feb 16 13:09:08 crc kubenswrapper[4740]: I0216 13:09:08.141263 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"9eb7886d68d4fb7b23f87ebcde6313d42d4df7456bf82bc6673f9b37546b3e94"} Feb 16 13:09:08 crc kubenswrapper[4740]: I0216 13:09:08.184047 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:09:08 crc kubenswrapper[4740]: I0216 13:09:08.254075 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:09:08 crc kubenswrapper[4740]: I0216 13:09:08.427194 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:09:09 crc kubenswrapper[4740]: I0216 13:09:09.157333 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"89ee86072a2aa5aa769dbb3fc8af0dedae416012327b0da0f671dc29d24efdd4"} Feb 16 13:09:09 crc kubenswrapper[4740]: I0216 13:09:09.157615 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"b9e49c34b73f2b8894798822aec411ad86ebcdb3331908430d8272079117e6b5"} Feb 16 13:09:09 crc kubenswrapper[4740]: I0216 13:09:09.157633 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"f3f21ed5de91058db3263fc496a34bc22e2b441331fd66b54b4bc9cead990032"} Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.205247 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mnb45" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="registry-server" containerID="cri-o://4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a" gracePeriod=2 Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.207696 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"8eac3dd791f97f1afd7163643c0e23e5b0a7f54c577caea8cb7f938a36e5839f"} Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.207758 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"d360d681d9c438fcaa27acba9343951212b1e02544de7c1e1304e90ef2da75e9"} Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.207771 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"52696d4a91bfa111df96105ddf56f0e0e399ae9a00207df032fcb4ac13e67d4e"} Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.207780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"36591053a2d7c2702b22c3072c1417e32b3dc91bddf0760490ffdaeb908b37f9"} Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.274671 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.130362359 podStartE2EDuration="25.274649433s" podCreationTimestamp="2026-02-16 13:08:45 +0000 UTC" firstStartedPulling="2026-02-16 13:09:03.232558502 +0000 UTC m=+970.608907223" lastFinishedPulling="2026-02-16 13:09:08.376845566 +0000 UTC m=+975.753194297" observedRunningTime="2026-02-16 13:09:10.26629469 +0000 UTC m=+977.642643431" watchObservedRunningTime="2026-02-16 13:09:10.274649433 +0000 UTC m=+977.650998154" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.608427 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:09:10 crc kubenswrapper[4740]: E0216 13:09:10.608868 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a769496-58ca-4540-9dc4-bd8df7e682fc" containerName="swift-ring-rebalance" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.608886 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a769496-58ca-4540-9dc4-bd8df7e682fc" containerName="swift-ring-rebalance" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.609094 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a769496-58ca-4540-9dc4-bd8df7e682fc" containerName="swift-ring-rebalance" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.610056 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.623150 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.627794 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736679 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724rn\" (UniqueName: \"kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736742 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736772 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736866 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736897 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736939 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.812467 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839030 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839115 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839172 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839239 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-724rn\" (UniqueName: \"kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839267 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839288 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.840266 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.840381 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.842386 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.842715 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.843081 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.887916 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-724rn\" (UniqueName: \"kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.936019 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.940928 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4zjm\" (UniqueName: \"kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm\") pod \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.941001 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content\") pod \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.941124 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities\") pod \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.942262 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities" (OuterVolumeSpecName: "utilities") pod "9e55b787-ebf2-405e-b1ef-545e0afe08b7" (UID: "9e55b787-ebf2-405e-b1ef-545e0afe08b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.950052 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm" (OuterVolumeSpecName: "kube-api-access-l4zjm") pod "9e55b787-ebf2-405e-b1ef-545e0afe08b7" (UID: "9e55b787-ebf2-405e-b1ef-545e0afe08b7"). InnerVolumeSpecName "kube-api-access-l4zjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.970567 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e55b787-ebf2-405e-b1ef-545e0afe08b7" (UID: "9e55b787-ebf2-405e-b1ef-545e0afe08b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.043008 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4zjm\" (UniqueName: \"kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.043043 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.043052 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.216533 4740 generic.go:334] "Generic (PLEG): container finished" podID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerID="4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a" exitCode=0 Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.216629 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerDied","Data":"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a"} Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.216720 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerDied","Data":"674fc74e32c41001843c95fbf70084ec6f8d9862901843c0b561d5e3afe04969"} Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.216748 4740 scope.go:117] "RemoveContainer" containerID="4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.216645 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.256033 4740 scope.go:117] "RemoveContainer" containerID="5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.258520 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.265672 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.277217 4740 scope.go:117] "RemoveContainer" containerID="31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.294207 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" path="/var/lib/kubelet/pods/9e55b787-ebf2-405e-b1ef-545e0afe08b7/volumes" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.295121 4740 scope.go:117] "RemoveContainer" containerID="4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a" Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.296118 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a\": container with ID 
starting with 4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a not found: ID does not exist" containerID="4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.296164 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a"} err="failed to get container status \"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a\": rpc error: code = NotFound desc = could not find container \"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a\": container with ID starting with 4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a not found: ID does not exist" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.296189 4740 scope.go:117] "RemoveContainer" containerID="5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c" Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.296687 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c\": container with ID starting with 5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c not found: ID does not exist" containerID="5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.296706 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c"} err="failed to get container status \"5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c\": rpc error: code = NotFound desc = could not find container \"5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c\": container with ID starting with 5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c not found: 
ID does not exist" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.296736 4740 scope.go:117] "RemoveContainer" containerID="31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081" Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.297443 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081\": container with ID starting with 31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081 not found: ID does not exist" containerID="31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.297464 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081"} err="failed to get container status \"31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081\": rpc error: code = NotFound desc = could not find container \"31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081\": container with ID starting with 31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081 not found: ID does not exist" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.405579 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:09:11 crc kubenswrapper[4740]: W0216 13:09:11.420361 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56781f2b_b49d_4234_981b_a01a10dfab05.slice/crio-07254c2d9a89687f122ec073a09001283bb6a4e93052f9c40534b8fe35f0fdbf WatchSource:0}: Error finding container 07254c2d9a89687f122ec073a09001283bb6a4e93052f9c40534b8fe35f0fdbf: Status 404 returned error can't find the container with id 07254c2d9a89687f122ec073a09001283bb6a4e93052f9c40534b8fe35f0fdbf Feb 16 13:09:11 crc 
kubenswrapper[4740]: I0216 13:09:11.547642 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gqbdm"]
Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.548062 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="extract-utilities"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.548086 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="extract-utilities"
Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.548101 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="registry-server"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.548110 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="registry-server"
Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.548128 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="extract-content"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.548135 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="extract-content"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.548337 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="registry-server"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.549068 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gqbdm"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.561205 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.570031 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gqbdm"]
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.657742 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97x9m\" (UniqueName: \"kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m\") pod \"root-account-create-update-gqbdm\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.657876 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts\") pod \"root-account-create-update-gqbdm\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.758882 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts\") pod \"root-account-create-update-gqbdm\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.759007 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97x9m\" (UniqueName: \"kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m\") pod \"root-account-create-update-gqbdm\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.759772 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts\") pod \"root-account-create-update-gqbdm\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.785915 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97x9m\" (UniqueName: \"kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m\") pod \"root-account-create-update-gqbdm\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm"
Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.904372 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gqbdm"
Feb 16 13:09:12 crc kubenswrapper[4740]: I0216 13:09:12.232647 4740 generic.go:334] "Generic (PLEG): container finished" podID="56781f2b-b49d-4234-981b-a01a10dfab05" containerID="96de88cee058193affe71534965c618fbce9086a5fd824cc8ef53366e9b1cf91" exitCode=0
Feb 16 13:09:12 crc kubenswrapper[4740]: I0216 13:09:12.232985 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" event={"ID":"56781f2b-b49d-4234-981b-a01a10dfab05","Type":"ContainerDied","Data":"96de88cee058193affe71534965c618fbce9086a5fd824cc8ef53366e9b1cf91"}
Feb 16 13:09:12 crc kubenswrapper[4740]: I0216 13:09:12.233007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" event={"ID":"56781f2b-b49d-4234-981b-a01a10dfab05","Type":"ContainerStarted","Data":"07254c2d9a89687f122ec073a09001283bb6a4e93052f9c40534b8fe35f0fdbf"}
Feb 16 13:09:12 crc kubenswrapper[4740]: I0216 13:09:12.364708 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gqbdm"]
Feb 16 13:09:13 crc kubenswrapper[4740]: I0216 13:09:13.862827 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qnt79" podUID="04335a5d-7cac-4a47-982c-70cae9db69ff" containerName="ovn-controller" probeResult="failure" output=<
Feb 16 13:09:13 crc kubenswrapper[4740]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 16 13:09:13 crc kubenswrapper[4740]: >
Feb 16 13:09:13 crc kubenswrapper[4740]: I0216 13:09:13.895788 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-crblj"
Feb 16 13:09:13 crc kubenswrapper[4740]: I0216 13:09:13.912571 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-crblj"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.145663 4740
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qnt79-config-5jvnm"]
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.146965 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.149837 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.176152 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79-config-5jvnm"]
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304044 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304166 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxk6c\" (UniqueName: \"kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304193 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304227 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304293 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304328 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.406107 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxk6c\" (UniqueName: \"kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.406172 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.406211 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.406274 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.406306 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.406347 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.408070 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.409001 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.411963 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.412059 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.412106 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.428367 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxk6c\" (UniqueName: \"kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.469072 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:18 crc kubenswrapper[4740]: I0216 13:09:18.882236 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qnt79" podUID="04335a5d-7cac-4a47-982c-70cae9db69ff" containerName="ovn-controller" probeResult="failure" output=<
Feb 16 13:09:18 crc kubenswrapper[4740]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 16 13:09:18 crc kubenswrapper[4740]: >
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.097030 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.470101 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.528740 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f6f4-account-create-update-l7nbq"]
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.530133 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f6f4-account-create-update-l7nbq"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.539318 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.540215 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-plzhg"]
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.541422 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-plzhg"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.553336 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f6f4-account-create-update-l7nbq"]
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.589241 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-plzhg"]
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.635377 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wnf\" (UniqueName: \"kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf\") pod \"cinder-f6f4-account-create-update-l7nbq\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.635449 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcn7\" (UniqueName: \"kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.635485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.635503 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts\") pod \"cinder-f6f4-account-create-update-l7nbq\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.737135 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4wnf\" (UniqueName: \"kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf\") pod \"cinder-f6f4-account-create-update-l7nbq\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.737199 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcn7\" (UniqueName: \"kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.737224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.737240 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts\") pod \"cinder-f6f4-account-create-update-l7nbq\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.737922 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts\") pod \"cinder-f6f4-account-create-update-l7nbq\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.739046 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.771981 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4wnf\" (UniqueName: \"kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf\") pod \"cinder-f6f4-account-create-update-l7nbq\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.775405 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcn7\" (UniqueName: \"kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.802011 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-j27bj"]
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.803418 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j27bj"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.808661 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-858d-account-create-update-xr2fs"]
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.810095 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-858d-account-create-update-xr2fs"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.811850 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.816752 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-j27bj"]
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.825957 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-858d-account-create-update-xr2fs"]
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.872610 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f6f4-account-create-update-l7nbq"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.883800 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-plzhg"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.893490 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qvqg7"]
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.894664 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.897401 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.901428 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nljh"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.901650 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.910123 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.911479 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vf54h"]
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.912565 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vf54h"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.927975 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qvqg7"]
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.939643 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjns\" (UniqueName: \"kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.939693 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts\") pod \"barbican-db-create-j27bj\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.939728 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.939790 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwvk5\" (UniqueName: \"kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5\") pod \"barbican-db-create-j27bj\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj"
Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.948416 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vf54h"]
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.008596 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2e1c-account-create-update-htmg9"]
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.009582 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2e1c-account-create-update-htmg9"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.011980 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.028919 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2e1c-account-create-update-htmg9"]
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.040960 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts\") pod \"barbican-db-create-j27bj\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041026 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041085 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") "
pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041138 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkjs8\" (UniqueName: \"kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041158 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwvk5\" (UniqueName: \"kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5\") pod \"barbican-db-create-j27bj\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041194 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gstg\" (UniqueName: \"kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041243 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041265 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjns\" (UniqueName: \"kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.042366 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts\") pod \"barbican-db-create-j27bj\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.042796 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.059053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjns\" (UniqueName: \"kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.090479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwvk5\" (UniqueName: \"kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5\") pod \"barbican-db-create-j27bj\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.142864 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gstg\" (UniqueName: \"kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.142931 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc4k6\" (UniqueName: \"kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.142951 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.143001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.143352 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.143421 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.143452 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkjs8\" (UniqueName: \"kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.144125 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.146333 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.146611 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.153438 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-db-create-j27bj" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.159300 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkjs8\" (UniqueName: \"kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.162249 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-858d-account-create-update-xr2fs" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.166643 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gstg\" (UniqueName: \"kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.218192 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.239516 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.245288 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc4k6\" (UniqueName: \"kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.245332 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.246192 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.261421 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc4k6\" (UniqueName: \"kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.325711 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:22 crc kubenswrapper[4740]: W0216 13:09:22.080077 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15147587_626f_4577_b5af_b8f574f60152.slice/crio-49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e WatchSource:0}: Error finding container 49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e: Status 404 returned error can't find the container with id 49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e Feb 16 13:09:22 crc kubenswrapper[4740]: E0216 13:09:22.207288 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 16 13:09:22 crc kubenswrapper[4740]: E0216 13:09:22.209134 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkb5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-7lg27_openstack(f092c8c4-9a32-4093-9a5c-bc5fd05d600e): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 16 13:09:22 crc kubenswrapper[4740]: E0216 13:09:22.210472 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-7lg27" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" Feb 16 13:09:22 crc kubenswrapper[4740]: I0216 13:09:22.390333 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gqbdm" event={"ID":"15147587-626f-4577-b5af-b8f574f60152","Type":"ContainerStarted","Data":"49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e"} Feb 16 13:09:22 crc kubenswrapper[4740]: E0216 13:09:22.479233 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-7lg27" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" Feb 16 13:09:22 crc kubenswrapper[4740]: I0216 13:09:22.796885 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79-config-5jvnm"] Feb 16 13:09:22 crc kubenswrapper[4740]: W0216 13:09:22.857098 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod052d2ebf_cf79_4395_b125_d955d8144cef.slice/crio-785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222 WatchSource:0}: Error finding container 785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222: Status 404 returned error can't find the container with id 785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222 Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.018174 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-f6f4-account-create-update-l7nbq"] Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.188731 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qvqg7"] Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.211391 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-plzhg"] Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.227556 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2e1c-account-create-update-htmg9"] Feb 16 13:09:23 crc kubenswrapper[4740]: W0216 13:09:23.262038 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5296850e_63c0_4801_bff8_bc5213555f58.slice/crio-1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956 WatchSource:0}: Error finding container 1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956: Status 404 returned error can't find the container with id 1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956 Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.408758 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2e1c-account-create-update-htmg9" event={"ID":"9aafb0ee-2681-48a9-b1e0-2442d0a16541","Type":"ContainerStarted","Data":"29cb9275df2ec6cccc97f91765fd05285ef7a3fb2cdc73b6751e434f88ea8a45"} Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.411753 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f6f4-account-create-update-l7nbq" event={"ID":"685d1543-1ab9-435f-b2c0-2a54c104e86f","Type":"ContainerStarted","Data":"2c739ebc1f1677ff442d40ca962571c9b9950c0f73007406504adc0d79d91e7b"} Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.411794 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f6f4-account-create-update-l7nbq" 
event={"ID":"685d1543-1ab9-435f-b2c0-2a54c104e86f","Type":"ContainerStarted","Data":"959763bb161fc3b22a27e1b1f632d4220099b91dd322483a10b3fcd00233dc6e"} Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.422120 4740 generic.go:334] "Generic (PLEG): container finished" podID="15147587-626f-4577-b5af-b8f574f60152" containerID="cd58c8b5fc614deaab5c81fb1b971a1824dde743ab1799ae9f95e3e1c7789b94" exitCode=0 Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.422177 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gqbdm" event={"ID":"15147587-626f-4577-b5af-b8f574f60152","Type":"ContainerDied","Data":"cd58c8b5fc614deaab5c81fb1b971a1824dde743ab1799ae9f95e3e1c7789b94"} Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.425292 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qvqg7" event={"ID":"3cea0875-b3a8-4a52-84ff-d9215408294b","Type":"ContainerStarted","Data":"5a0b7120fe35634905135cb504138e76441745ac6645a1eac08b8b566d8c1013"} Feb 16 13:09:23 crc kubenswrapper[4740]: W0216 13:09:23.430032 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod634925bb_5381_4298_a256_447ef56a2f2a.slice/crio-00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed WatchSource:0}: Error finding container 00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed: Status 404 returned error can't find the container with id 00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.430232 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-plzhg" event={"ID":"5296850e-63c0-4801-bff8-bc5213555f58","Type":"ContainerStarted","Data":"1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956"} Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.444678 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" event={"ID":"56781f2b-b49d-4234-981b-a01a10dfab05","Type":"ContainerStarted","Data":"9b0f87ae4e501b819b69299fe6b0bca555aa557876b95fd376763eb368141597"} Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.446010 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-858d-account-create-update-xr2fs"] Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.446046 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.456233 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-5jvnm" event={"ID":"052d2ebf-cf79-4395-b125-d955d8144cef","Type":"ContainerStarted","Data":"8f70084c6e792770a51a2232e265409290d0a24738129fda218357cafbb39d87"} Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.456286 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-5jvnm" event={"ID":"052d2ebf-cf79-4395-b125-d955d8144cef","Type":"ContainerStarted","Data":"785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222"} Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.469580 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vf54h"] Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.483697 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-f6f4-account-create-update-l7nbq" podStartSLOduration=3.483679206 podStartE2EDuration="3.483679206s" podCreationTimestamp="2026-02-16 13:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:23.435544591 +0000 UTC m=+990.811893332" watchObservedRunningTime="2026-02-16 13:09:23.483679206 +0000 UTC m=+990.860027927" Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.502565 
4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-j27bj"] Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.518128 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" podStartSLOduration=13.518105189 podStartE2EDuration="13.518105189s" podCreationTimestamp="2026-02-16 13:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:23.497713607 +0000 UTC m=+990.874062328" watchObservedRunningTime="2026-02-16 13:09:23.518105189 +0000 UTC m=+990.894453910" Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.531214 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qnt79-config-5jvnm" podStartSLOduration=9.531197041 podStartE2EDuration="9.531197041s" podCreationTimestamp="2026-02-16 13:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:23.528357292 +0000 UTC m=+990.904706023" watchObservedRunningTime="2026-02-16 13:09:23.531197041 +0000 UTC m=+990.907545762" Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.892991 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qnt79" Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.487011 4740 generic.go:334] "Generic (PLEG): container finished" podID="052d2ebf-cf79-4395-b125-d955d8144cef" containerID="8f70084c6e792770a51a2232e265409290d0a24738129fda218357cafbb39d87" exitCode=0 Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.487351 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-5jvnm" event={"ID":"052d2ebf-cf79-4395-b125-d955d8144cef","Type":"ContainerDied","Data":"8f70084c6e792770a51a2232e265409290d0a24738129fda218357cafbb39d87"} Feb 16 
13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.489536 4740 generic.go:334] "Generic (PLEG): container finished" podID="9aafb0ee-2681-48a9-b1e0-2442d0a16541" containerID="f5820346a7406bd0978f9265ad799cb90df8fd2faf62bf128649990ff88a581c" exitCode=0 Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.489602 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2e1c-account-create-update-htmg9" event={"ID":"9aafb0ee-2681-48a9-b1e0-2442d0a16541","Type":"ContainerDied","Data":"f5820346a7406bd0978f9265ad799cb90df8fd2faf62bf128649990ff88a581c"} Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.494682 4740 generic.go:334] "Generic (PLEG): container finished" podID="685d1543-1ab9-435f-b2c0-2a54c104e86f" containerID="2c739ebc1f1677ff442d40ca962571c9b9950c0f73007406504adc0d79d91e7b" exitCode=0 Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.494751 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f6f4-account-create-update-l7nbq" event={"ID":"685d1543-1ab9-435f-b2c0-2a54c104e86f","Type":"ContainerDied","Data":"2c739ebc1f1677ff442d40ca962571c9b9950c0f73007406504adc0d79d91e7b"} Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.497461 4740 generic.go:334] "Generic (PLEG): container finished" podID="5296850e-63c0-4801-bff8-bc5213555f58" containerID="297fab87042f05bdda341fb78ed7de393ee4aec91b3ea8c4dbadb862e85e4e33" exitCode=0 Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.497901 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-plzhg" event={"ID":"5296850e-63c0-4801-bff8-bc5213555f58","Type":"ContainerDied","Data":"297fab87042f05bdda341fb78ed7de393ee4aec91b3ea8c4dbadb862e85e4e33"} Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.506348 4740 generic.go:334] "Generic (PLEG): container finished" podID="634925bb-5381-4298-a256-447ef56a2f2a" containerID="a426852617c1fdbfaae0a0c105e30e4a9ba96bd1307ceb03aae494de8c516444" exitCode=0 Feb 16 13:09:24 crc 
kubenswrapper[4740]: I0216 13:09:24.506444 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-858d-account-create-update-xr2fs" event={"ID":"634925bb-5381-4298-a256-447ef56a2f2a","Type":"ContainerDied","Data":"a426852617c1fdbfaae0a0c105e30e4a9ba96bd1307ceb03aae494de8c516444"} Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.506494 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-858d-account-create-update-xr2fs" event={"ID":"634925bb-5381-4298-a256-447ef56a2f2a","Type":"ContainerStarted","Data":"00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed"} Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.512743 4740 generic.go:334] "Generic (PLEG): container finished" podID="a14f3fd5-4d53-4336-85b1-7d636060bd0a" containerID="c259d38cb4fa3c5851c1172b3420cec9a5f775ccc35003b355c462a18e258ac9" exitCode=0 Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.513000 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vf54h" event={"ID":"a14f3fd5-4d53-4336-85b1-7d636060bd0a","Type":"ContainerDied","Data":"c259d38cb4fa3c5851c1172b3420cec9a5f775ccc35003b355c462a18e258ac9"} Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.513135 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vf54h" event={"ID":"a14f3fd5-4d53-4336-85b1-7d636060bd0a","Type":"ContainerStarted","Data":"a832dcf84b0f8322b7d99008949d66452327821e1078edfcb734f8844ca6ea67"} Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.517469 4740 generic.go:334] "Generic (PLEG): container finished" podID="65301f64-cd42-4faf-b454-a43c7c7096a1" containerID="f2c94c167796a74c5aec9d021793c96199b01a3ee67b46b0fd7d1575574cf5b7" exitCode=0 Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.517661 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j27bj" 
event={"ID":"65301f64-cd42-4faf-b454-a43c7c7096a1","Type":"ContainerDied","Data":"f2c94c167796a74c5aec9d021793c96199b01a3ee67b46b0fd7d1575574cf5b7"} Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.517788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j27bj" event={"ID":"65301f64-cd42-4faf-b454-a43c7c7096a1","Type":"ContainerStarted","Data":"6f2d9667515562668bded18ba8d455174b07ebe7fa37c1dddd5c4e21808fe36d"} Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.906942 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gqbdm" Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.056468 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97x9m\" (UniqueName: \"kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m\") pod \"15147587-626f-4577-b5af-b8f574f60152\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.056916 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts\") pod \"15147587-626f-4577-b5af-b8f574f60152\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.057731 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15147587-626f-4577-b5af-b8f574f60152" (UID: "15147587-626f-4577-b5af-b8f574f60152"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.063446 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m" (OuterVolumeSpecName: "kube-api-access-97x9m") pod "15147587-626f-4577-b5af-b8f574f60152" (UID: "15147587-626f-4577-b5af-b8f574f60152"). InnerVolumeSpecName "kube-api-access-97x9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.158417 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97x9m\" (UniqueName: \"kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.158452 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.543439 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gqbdm" Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.544197 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gqbdm" event={"ID":"15147587-626f-4577-b5af-b8f574f60152","Type":"ContainerDied","Data":"49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e"} Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.544247 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.542113 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.550098 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-plzhg" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.587040 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j27bj" event={"ID":"65301f64-cd42-4faf-b454-a43c7c7096a1","Type":"ContainerDied","Data":"6f2d9667515562668bded18ba8d455174b07ebe7fa37c1dddd5c4e21808fe36d"} Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.587387 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f2d9667515562668bded18ba8d455174b07ebe7fa37c1dddd5c4e21808fe36d" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.588501 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-858d-account-create-update-xr2fs" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.593843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-5jvnm" event={"ID":"052d2ebf-cf79-4395-b125-d955d8144cef","Type":"ContainerDied","Data":"785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222"} Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.593892 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.596936 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2e1c-account-create-update-htmg9" event={"ID":"9aafb0ee-2681-48a9-b1e0-2442d0a16541","Type":"ContainerDied","Data":"29cb9275df2ec6cccc97f91765fd05285ef7a3fb2cdc73b6751e434f88ea8a45"} Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.596973 4740 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="29cb9275df2ec6cccc97f91765fd05285ef7a3fb2cdc73b6751e434f88ea8a45" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.597334 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.598540 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f6f4-account-create-update-l7nbq" event={"ID":"685d1543-1ab9-435f-b2c0-2a54c104e86f","Type":"ContainerDied","Data":"959763bb161fc3b22a27e1b1f632d4220099b91dd322483a10b3fcd00233dc6e"} Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.598564 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="959763bb161fc3b22a27e1b1f632d4220099b91dd322483a10b3fcd00233dc6e" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.600699 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-plzhg" event={"ID":"5296850e-63c0-4801-bff8-bc5213555f58","Type":"ContainerDied","Data":"1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956"} Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.600721 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.600770 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-plzhg" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.601034 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.602165 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-858d-account-create-update-xr2fs" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.602169 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-858d-account-create-update-xr2fs" event={"ID":"634925bb-5381-4298-a256-447ef56a2f2a","Type":"ContainerDied","Data":"00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed"} Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.602200 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.607985 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vf54h" event={"ID":"a14f3fd5-4d53-4336-85b1-7d636060bd0a","Type":"ContainerDied","Data":"a832dcf84b0f8322b7d99008949d66452327821e1078edfcb734f8844ca6ea67"} Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.608016 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a832dcf84b0f8322b7d99008949d66452327821e1078edfcb734f8844ca6ea67" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.608065 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.614198 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f6f4-account-create-update-l7nbq" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.622062 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-j27bj" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.626781 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts\") pod \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.626895 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfcn7\" (UniqueName: \"kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7\") pod \"5296850e-63c0-4801-bff8-bc5213555f58\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.626921 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkjs8\" (UniqueName: \"kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8\") pod \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.626987 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts\") pod \"5296850e-63c0-4801-bff8-bc5213555f58\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.627697 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5296850e-63c0-4801-bff8-bc5213555f58" (UID: "5296850e-63c0-4801-bff8-bc5213555f58"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.627774 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a14f3fd5-4d53-4336-85b1-7d636060bd0a" (UID: "a14f3fd5-4d53-4336-85b1-7d636060bd0a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.633409 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8" (OuterVolumeSpecName: "kube-api-access-fkjs8") pod "a14f3fd5-4d53-4336-85b1-7d636060bd0a" (UID: "a14f3fd5-4d53-4336-85b1-7d636060bd0a"). InnerVolumeSpecName "kube-api-access-fkjs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.633569 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7" (OuterVolumeSpecName: "kube-api-access-dfcn7") pod "5296850e-63c0-4801-bff8-bc5213555f58" (UID: "5296850e-63c0-4801-bff8-bc5213555f58"). InnerVolumeSpecName "kube-api-access-dfcn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728672 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts\") pod \"685d1543-1ab9-435f-b2c0-2a54c104e86f\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728831 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwvk5\" (UniqueName: \"kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5\") pod \"65301f64-cd42-4faf-b454-a43c7c7096a1\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728867 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4wnf\" (UniqueName: \"kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf\") pod \"685d1543-1ab9-435f-b2c0-2a54c104e86f\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728911 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgjns\" (UniqueName: \"kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns\") pod \"634925bb-5381-4298-a256-447ef56a2f2a\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728947 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728969 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pc4k6\" (UniqueName: \"kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6\") pod \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728995 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxk6c\" (UniqueName: \"kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729016 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729238 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "685d1543-1ab9-435f-b2c0-2a54c104e86f" (UID: "685d1543-1ab9-435f-b2c0-2a54c104e86f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729383 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729510 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts\") pod \"65301f64-cd42-4faf-b454-a43c7c7096a1\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729549 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729612 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts\") pod \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729643 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run" (OuterVolumeSpecName: "var-run") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729647 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729706 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts\") pod \"634925bb-5381-4298-a256-447ef56a2f2a\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729799 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729897 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730072 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730138 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65301f64-cd42-4faf-b454-a43c7c7096a1" (UID: "65301f64-cd42-4faf-b454-a43c7c7096a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730197 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "634925bb-5381-4298-a256-447ef56a2f2a" (UID: "634925bb-5381-4298-a256-447ef56a2f2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730199 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9aafb0ee-2681-48a9-b1e0-2442d0a16541" (UID: "9aafb0ee-2681-48a9-b1e0-2442d0a16541"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730568 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts" (OuterVolumeSpecName: "scripts") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730648 4740 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730672 4740 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730684 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730696 4740 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730708 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730720 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730731 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730741 
4740 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730754 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfcn7\" (UniqueName: \"kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730768 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730779 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkjs8\" (UniqueName: \"kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730790 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.732253 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5" (OuterVolumeSpecName: "kube-api-access-zwvk5") pod "65301f64-cd42-4faf-b454-a43c7c7096a1" (UID: "65301f64-cd42-4faf-b454-a43c7c7096a1"). InnerVolumeSpecName "kube-api-access-zwvk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.732303 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c" (OuterVolumeSpecName: "kube-api-access-vxk6c") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "kube-api-access-vxk6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.732963 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6" (OuterVolumeSpecName: "kube-api-access-pc4k6") pod "9aafb0ee-2681-48a9-b1e0-2442d0a16541" (UID: "9aafb0ee-2681-48a9-b1e0-2442d0a16541"). InnerVolumeSpecName "kube-api-access-pc4k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.733370 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns" (OuterVolumeSpecName: "kube-api-access-bgjns") pod "634925bb-5381-4298-a256-447ef56a2f2a" (UID: "634925bb-5381-4298-a256-447ef56a2f2a"). InnerVolumeSpecName "kube-api-access-bgjns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.734717 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf" (OuterVolumeSpecName: "kube-api-access-m4wnf") pod "685d1543-1ab9-435f-b2c0-2a54c104e86f" (UID: "685d1543-1ab9-435f-b2c0-2a54c104e86f"). InnerVolumeSpecName "kube-api-access-m4wnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834191 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwvk5\" (UniqueName: \"kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834232 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4wnf\" (UniqueName: \"kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834245 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgjns\" (UniqueName: \"kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834257 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc4k6\" (UniqueName: \"kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834268 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxk6c\" (UniqueName: \"kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834284 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.618371 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.618393 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qvqg7" event={"ID":"3cea0875-b3a8-4a52-84ff-d9215408294b","Type":"ContainerStarted","Data":"032b7ecb51e2df34f10ba43675ea39076c3d719ca854e2ab5f2977210eadf6f6"} Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.618460 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.618373 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j27bj" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.618564 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f6f4-account-create-update-l7nbq" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.654600 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qvqg7" podStartSLOduration=4.461117553 podStartE2EDuration="9.654579493s" podCreationTimestamp="2026-02-16 13:09:20 +0000 UTC" firstStartedPulling="2026-02-16 13:09:23.209259811 +0000 UTC m=+990.585608532" lastFinishedPulling="2026-02-16 13:09:28.402721751 +0000 UTC m=+995.779070472" observedRunningTime="2026-02-16 13:09:29.642633657 +0000 UTC m=+997.018982398" watchObservedRunningTime="2026-02-16 13:09:29.654579493 +0000 UTC m=+997.030928214" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.798954 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qnt79-config-5jvnm"] Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.827770 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qnt79-config-5jvnm"] Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.869841 4740 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qnt79-config-xjr57"] Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.870453 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aafb0ee-2681-48a9-b1e0-2442d0a16541" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.870564 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aafb0ee-2681-48a9-b1e0-2442d0a16541" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.870661 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634925bb-5381-4298-a256-447ef56a2f2a" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.870745 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="634925bb-5381-4298-a256-447ef56a2f2a" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.870910 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65301f64-cd42-4faf-b454-a43c7c7096a1" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.870995 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="65301f64-cd42-4faf-b454-a43c7c7096a1" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.871083 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d2ebf-cf79-4395-b125-d955d8144cef" containerName="ovn-config" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.871153 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d2ebf-cf79-4395-b125-d955d8144cef" containerName="ovn-config" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.871231 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14f3fd5-4d53-4336-85b1-7d636060bd0a" containerName="mariadb-database-create" Feb 16 13:09:29 crc 
kubenswrapper[4740]: I0216 13:09:29.871306 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14f3fd5-4d53-4336-85b1-7d636060bd0a" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.871366 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5296850e-63c0-4801-bff8-bc5213555f58" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.871439 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5296850e-63c0-4801-bff8-bc5213555f58" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.871526 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15147587-626f-4577-b5af-b8f574f60152" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.871593 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="15147587-626f-4577-b5af-b8f574f60152" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.871661 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685d1543-1ab9-435f-b2c0-2a54c104e86f" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.871723 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="685d1543-1ab9-435f-b2c0-2a54c104e86f" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.871950 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="634925bb-5381-4298-a256-447ef56a2f2a" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872037 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="685d1543-1ab9-435f-b2c0-2a54c104e86f" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872105 4740 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5296850e-63c0-4801-bff8-bc5213555f58" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872170 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="65301f64-cd42-4faf-b454-a43c7c7096a1" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872225 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a14f3fd5-4d53-4336-85b1-7d636060bd0a" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872357 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="15147587-626f-4577-b5af-b8f574f60152" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872414 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aafb0ee-2681-48a9-b1e0-2442d0a16541" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872469 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="052d2ebf-cf79-4395-b125-d955d8144cef" containerName="ovn-config" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.877162 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.877261 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79-config-xjr57"] Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.879264 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951444 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951527 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951589 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951626 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: 
\"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951649 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwxq\" (UniqueName: \"kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951666 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053649 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053722 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053756 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts\") pod 
\"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053777 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljwxq\" (UniqueName: \"kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053853 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053887 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053980 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053991 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn\") pod 
\"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57"
Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.054026 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57"
Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.054728 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57"
Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.056129 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57"
Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.079324 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljwxq\" (UniqueName: \"kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57"
Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.195978 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79-config-xjr57"
Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.663418 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79-config-xjr57"]
Feb 16 13:09:30 crc kubenswrapper[4740]: W0216 13:09:30.670570 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc23d24c8_fce8_4b94_8d3a_44fe83eae896.slice/crio-a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e WatchSource:0}: Error finding container a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e: Status 404 returned error can't find the container with id a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e
Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.937800 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds"
Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.997599 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"]
Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.997842 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="dnsmasq-dns" containerID="cri-o://1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64" gracePeriod=10
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.087708 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused"
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.291530 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052d2ebf-cf79-4395-b125-d955d8144cef" path="/var/lib/kubelet/pods/052d2ebf-cf79-4395-b125-d955d8144cef/volumes"
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.513243 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2"
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.634963 4740 generic.go:334] "Generic (PLEG): container finished" podID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerID="1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64" exitCode=0
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.635017 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" event={"ID":"b414a4c4-7799-4c49-9aa9-5718c2e5855f","Type":"ContainerDied","Data":"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64"}
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.635042 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" event={"ID":"b414a4c4-7799-4c49-9aa9-5718c2e5855f","Type":"ContainerDied","Data":"80f534c4bd81393c75dfb37a96ed92ed9eb1b28b8bfbf33de3f706f2e7523c70"}
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.635058 4740 scope.go:117] "RemoveContainer" containerID="1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64"
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.635151 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2"
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.638941 4740 generic.go:334] "Generic (PLEG): container finished" podID="c23d24c8-fce8-4b94-8d3a-44fe83eae896" containerID="4b198406a524d7dff3e729a1eee0d73938c8ae12df658ec8480ab9355f0779b0" exitCode=0
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.638978 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-xjr57" event={"ID":"c23d24c8-fce8-4b94-8d3a-44fe83eae896","Type":"ContainerDied","Data":"4b198406a524d7dff3e729a1eee0d73938c8ae12df658ec8480ab9355f0779b0"}
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.639000 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-xjr57" event={"ID":"c23d24c8-fce8-4b94-8d3a-44fe83eae896","Type":"ContainerStarted","Data":"a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e"}
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.666948 4740 scope.go:117] "RemoveContainer" containerID="6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692"
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.677092 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc\") pod \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") "
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.677132 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config\") pod \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") "
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.677181 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb\") pod \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") "
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.677223 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsv45\" (UniqueName: \"kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45\") pod \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") "
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.677322 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb\") pod \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") "
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.683933 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45" (OuterVolumeSpecName: "kube-api-access-vsv45") pod "b414a4c4-7799-4c49-9aa9-5718c2e5855f" (UID: "b414a4c4-7799-4c49-9aa9-5718c2e5855f"). InnerVolumeSpecName "kube-api-access-vsv45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.690186 4740 scope.go:117] "RemoveContainer" containerID="1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64"
Feb 16 13:09:31 crc kubenswrapper[4740]: E0216 13:09:31.690716 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64\": container with ID starting with 1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64 not found: ID does not exist" containerID="1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64"
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.690759 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64"} err="failed to get container status \"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64\": rpc error: code = NotFound desc = could not find container \"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64\": container with ID starting with 1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64 not found: ID does not exist"
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.690783 4740 scope.go:117] "RemoveContainer" containerID="6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692"
Feb 16 13:09:31 crc kubenswrapper[4740]: E0216 13:09:31.691489 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692\": container with ID starting with 6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692 not found: ID does not exist" containerID="6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692"
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.691517 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692"} err="failed to get container status \"6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692\": rpc error: code = NotFound desc = could not find container \"6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692\": container with ID starting with 6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692 not found: ID does not exist"
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.719861 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b414a4c4-7799-4c49-9aa9-5718c2e5855f" (UID: "b414a4c4-7799-4c49-9aa9-5718c2e5855f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.721614 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b414a4c4-7799-4c49-9aa9-5718c2e5855f" (UID: "b414a4c4-7799-4c49-9aa9-5718c2e5855f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.725187 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config" (OuterVolumeSpecName: "config") pod "b414a4c4-7799-4c49-9aa9-5718c2e5855f" (UID: "b414a4c4-7799-4c49-9aa9-5718c2e5855f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.730631 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b414a4c4-7799-4c49-9aa9-5718c2e5855f" (UID: "b414a4c4-7799-4c49-9aa9-5718c2e5855f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.779326 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.779369 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.779381 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.779395 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsv45\" (UniqueName: \"kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.779407 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.982340 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"]
Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.987517 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"]
Feb 16 13:09:32 crc kubenswrapper[4740]: I0216 13:09:32.647878 4740 generic.go:334] "Generic (PLEG): container finished" podID="3cea0875-b3a8-4a52-84ff-d9215408294b" containerID="032b7ecb51e2df34f10ba43675ea39076c3d719ca854e2ab5f2977210eadf6f6" exitCode=0
Feb 16 13:09:32 crc kubenswrapper[4740]: I0216 13:09:32.647950 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qvqg7" event={"ID":"3cea0875-b3a8-4a52-84ff-d9215408294b","Type":"ContainerDied","Data":"032b7ecb51e2df34f10ba43675ea39076c3d719ca854e2ab5f2977210eadf6f6"}
Feb 16 13:09:32 crc kubenswrapper[4740]: I0216 13:09:32.976763 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79-config-xjr57"
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.101955 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102012 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102039 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102096 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run" (OuterVolumeSpecName: "var-run") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102135 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102201 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102211 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102219 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102315 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljwxq\" (UniqueName: \"kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102733 4740 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102748 4740 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102758 4740 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.103071 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.103596 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts" (OuterVolumeSpecName: "scripts") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.107853 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq" (OuterVolumeSpecName: "kube-api-access-ljwxq") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "kube-api-access-ljwxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.205222 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.205276 4740 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.205300 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljwxq\" (UniqueName: \"kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.310103 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" path="/var/lib/kubelet/pods/b414a4c4-7799-4c49-9aa9-5718c2e5855f/volumes"
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.659824 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-xjr57" event={"ID":"c23d24c8-fce8-4b94-8d3a-44fe83eae896","Type":"ContainerDied","Data":"a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e"}
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.659860 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79-config-xjr57"
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.659876 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e"
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.974253 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.068101 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qnt79-config-xjr57"]
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.075428 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qnt79-config-xjr57"]
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.117108 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data\") pod \"3cea0875-b3a8-4a52-84ff-d9215408294b\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") "
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.117388 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gstg\" (UniqueName: \"kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg\") pod \"3cea0875-b3a8-4a52-84ff-d9215408294b\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") "
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.117491 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle\") pod \"3cea0875-b3a8-4a52-84ff-d9215408294b\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") "
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.127000 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg" (OuterVolumeSpecName: "kube-api-access-4gstg") pod "3cea0875-b3a8-4a52-84ff-d9215408294b" (UID: "3cea0875-b3a8-4a52-84ff-d9215408294b"). InnerVolumeSpecName "kube-api-access-4gstg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.137970 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cea0875-b3a8-4a52-84ff-d9215408294b" (UID: "3cea0875-b3a8-4a52-84ff-d9215408294b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.167137 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data" (OuterVolumeSpecName: "config-data") pod "3cea0875-b3a8-4a52-84ff-d9215408294b" (UID: "3cea0875-b3a8-4a52-84ff-d9215408294b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.219234 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.219461 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.219573 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gstg\" (UniqueName: \"kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.670269 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qvqg7" event={"ID":"3cea0875-b3a8-4a52-84ff-d9215408294b","Type":"ContainerDied","Data":"5a0b7120fe35634905135cb504138e76441745ac6645a1eac08b8b566d8c1013"}
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.670307 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a0b7120fe35634905135cb504138e76441745ac6645a1eac08b8b566d8c1013"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.670356 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.919005 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"]
Feb 16 13:09:34 crc kubenswrapper[4740]: E0216 13:09:34.919702 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="init"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.919726 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="init"
Feb 16 13:09:34 crc kubenswrapper[4740]: E0216 13:09:34.919758 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23d24c8-fce8-4b94-8d3a-44fe83eae896" containerName="ovn-config"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.919766 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23d24c8-fce8-4b94-8d3a-44fe83eae896" containerName="ovn-config"
Feb 16 13:09:34 crc kubenswrapper[4740]: E0216 13:09:34.919779 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="dnsmasq-dns"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.919789 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="dnsmasq-dns"
Feb 16 13:09:34 crc kubenswrapper[4740]: E0216 13:09:34.919805 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cea0875-b3a8-4a52-84ff-d9215408294b" containerName="keystone-db-sync"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.919834 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cea0875-b3a8-4a52-84ff-d9215408294b" containerName="keystone-db-sync"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.922749 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="dnsmasq-dns"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.922781 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cea0875-b3a8-4a52-84ff-d9215408294b" containerName="keystone-db-sync"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.922793 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c23d24c8-fce8-4b94-8d3a-44fe83eae896" containerName="ovn-config"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.923597 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.936847 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"]
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.981057 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l5jrj"]
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.983256 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.987178 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.987363 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nljh"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.987529 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.989556 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.989688 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.996944 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l5jrj"]
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.029927 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.029993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.030042 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.030084 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvrs2\" (UniqueName: \"kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.030117 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.030149 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.082723 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-ffb796745-6csq7"]
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.086297 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffb796745-6csq7"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.090744 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.090951 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.091129 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.091307 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-n5hsv"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.102160 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ffb796745-6csq7"]
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131630 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131715 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvrs2\" (UniqueName: \"kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131745 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7xlf\" (UniqueName: \"kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131777 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131802 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131855 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131881 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131912 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131938 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.132007 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.132031 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.132071 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.133209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.133912 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.134585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.144144 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.145563
4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.162882 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvrs2\" (UniqueName: \"kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.174241 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.176438 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.180804 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.182354 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.203988 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233162 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233209 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233258 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ks2m\" (UniqueName: \"kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233298 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7xlf\" (UniqueName: \"kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233326 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233360 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233513 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233566 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233613 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96l6h\" (UniqueName: \"kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233656 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233712 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233870 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233957 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.234049 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.234087 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.234143 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs\") pod \"horizon-ffb796745-6csq7\" (UID: 
\"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.234172 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.239217 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.244571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.245433 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.248363 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.250295 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.257530 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.264354 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7xlf\" (UniqueName: \"kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.276001 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hclws"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.277758 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.284265 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hg2gh" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.284448 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.284561 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.344880 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345163 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345207 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 
13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345327 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345431 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345491 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345613 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345715 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345779 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345900 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ks2m\" (UniqueName: \"kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.346046 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.346119 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.346227 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96l6h\" (UniqueName: \"kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.346318 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dr7b\" (UniqueName: \"kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b\") pod 
\"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.353684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.354139 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.355995 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.370358 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.373352 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.373618 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd\") pod \"ceilometer-0\" (UID: 
\"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.376440 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.380154 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.398259 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.422023 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96l6h\" (UniqueName: \"kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.422851 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c23d24c8-fce8-4b94-8d3a-44fe83eae896" path="/var/lib/kubelet/pods/c23d24c8-fce8-4b94-8d3a-44fe83eae896/volumes" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.423228 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data\") 
pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.425766 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.426989 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ks2m\" (UniqueName: \"kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.427672 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-dlcqm"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.432405 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.435093 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.436590 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.442607 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.443216 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-n77m5" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.452789 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.454710 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.454876 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dr7b\" (UniqueName: \"kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.456884 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hclws"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.469516 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle\") pod \"neutron-db-sync-hclws\" (UID: 
\"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.470127 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.489340 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.490496 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dr7b\" (UniqueName: \"kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.496738 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dlcqm"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.524938 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.539233 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2hxgr"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.540289 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.544715 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mtx8t" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.544897 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.544998 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556736 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcwhc\" (UniqueName: \"kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556792 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd2x2\" (UniqueName: \"kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556828 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556900 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556919 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556941 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.578832 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2hxgr"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.599978 4740 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-d9rnm"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.601379 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.610062 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.610229 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.610376 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bjvkq" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.621987 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d9rnm"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.630667 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.654360 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.656517 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.658933 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.658984 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659006 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659030 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4fvz\" (UniqueName: \"kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659316 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " 
pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659344 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659391 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659413 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659431 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc 
kubenswrapper[4740]: I0216 13:09:35.659508 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659553 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659650 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659731 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659791 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659845 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcwhc\" (UniqueName: \"kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659930 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd2x2\" (UniqueName: \"kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659952 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv6qp\" (UniqueName: \"kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.662562 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.663239 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.665348 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.681241 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.683657 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.684388 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.688363 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.706471 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jd2x2\" (UniqueName: \"kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.709335 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.710071 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcwhc\" (UniqueName: \"kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.765056 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766731 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766804 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv6qp\" (UniqueName: \"kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766887 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766917 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4fvz\" (UniqueName: \"kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766939 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766990 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767022 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts\") pod \"cinder-db-sync-2hxgr\" (UID: 
\"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767050 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767075 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767113 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767173 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767209 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " 
pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767265 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767287 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767317 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k97f\" (UniqueName: \"kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767348 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.776950 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: 
I0216 13:09:35.778636 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.778692 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.781107 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.782691 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.789499 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.790188 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.791648 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.796769 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4fvz\" (UniqueName: \"kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.802195 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.802768 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv6qp\" (UniqueName: \"kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.824024 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.841249 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.871975 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.872033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k97f\" (UniqueName: \"kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.872076 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.872113 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.872148 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 
13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.872224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.873919 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.874308 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.874463 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.874464 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.874983 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.880757 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.899489 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k97f\" (UniqueName: \"kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.966290 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.984615 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.054268 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"] Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.190088 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l5jrj"] Feb 16 13:09:36 crc kubenswrapper[4740]: W0216 13:09:36.205162 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93a2e5d2_cd7f_48d9_87ff_89475b5eeee0.slice/crio-c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259 WatchSource:0}: Error finding container c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259: Status 404 returned error can't find the container with id c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259 Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.352454 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ffb796745-6csq7"] Feb 16 13:09:36 crc kubenswrapper[4740]: W0216 13:09:36.356392 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode23974e9_800c_4295_8f84_89b4052280cd.slice/crio-b5b6803f18aed28cfffe176c666815d2af3e0f9085dbb6960a1b2061739c2efb WatchSource:0}: Error finding container b5b6803f18aed28cfffe176c666815d2af3e0f9085dbb6960a1b2061739c2efb: Status 404 returned error can't find the container with id b5b6803f18aed28cfffe176c666815d2af3e0f9085dbb6960a1b2061739c2efb Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.362848 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:09:36 crc kubenswrapper[4740]: W0216 13:09:36.364018 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d3f831f_b6c1_4f65_85cb_e5ce8ffc3f93.slice/crio-4de65a95aeb197276b59d3580d2d47362fd491a1ba955970f286211155c099b7 WatchSource:0}: Error finding container 4de65a95aeb197276b59d3580d2d47362fd491a1ba955970f286211155c099b7: Status 404 returned error can't find the container with id 4de65a95aeb197276b59d3580d2d47362fd491a1ba955970f286211155c099b7 Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.548363 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hclws"] Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.555518 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:09:36 crc kubenswrapper[4740]: W0216 13:09:36.582526 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c41d146_de9f_4d90_bb9e_6c12fc832650.slice/crio-1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d WatchSource:0}: Error finding container 1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d: Status 404 returned error can't find the container with id 1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.584681 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dlcqm"] Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.708580 4740 generic.go:334] "Generic (PLEG): container finished" podID="a4ce9a30-45a5-40c6-a259-00a790928e07" containerID="872e1e092f6c8012cfe8884aa92b7bfd5c3ecdc7b66acc3a62c144ec783f7325" exitCode=0 Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.708688 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" event={"ID":"a4ce9a30-45a5-40c6-a259-00a790928e07","Type":"ContainerDied","Data":"872e1e092f6c8012cfe8884aa92b7bfd5c3ecdc7b66acc3a62c144ec783f7325"} Feb 
16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.708721 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" event={"ID":"a4ce9a30-45a5-40c6-a259-00a790928e07","Type":"ContainerStarted","Data":"01c87342c96bf6d54c3f1ad0c1b57760034f7e0851745284f9ad2ed4dcf6b3ad"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.711743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerStarted","Data":"ebb9cfe67450ca7ef04f47283c5bb6f822165573117c5716dd75a543c9096769"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.713451 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dlcqm" event={"ID":"b63f4468-5c78-4dfd-a40a-302877eba3dc","Type":"ContainerStarted","Data":"b7811f49189bed88b6538a8117cf752349ac762f9fdcabeb27482bf9fb8a222e"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.714646 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerStarted","Data":"b5b6803f18aed28cfffe176c666815d2af3e0f9085dbb6960a1b2061739c2efb"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.715734 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ffb796745-6csq7" event={"ID":"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93","Type":"ContainerStarted","Data":"4de65a95aeb197276b59d3580d2d47362fd491a1ba955970f286211155c099b7"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.716702 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hclws" event={"ID":"2c41d146-de9f-4d90-bb9e-6c12fc832650","Type":"ContainerStarted","Data":"1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.735754 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-l5jrj" event={"ID":"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0","Type":"ContainerStarted","Data":"b6158fa1fc9cea7906b55a39bb9178812e4aaa28f5f59307e4b0bc0142b18d51"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.735792 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l5jrj" event={"ID":"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0","Type":"ContainerStarted","Data":"c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.758970 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l5jrj" podStartSLOduration=2.758947352 podStartE2EDuration="2.758947352s" podCreationTimestamp="2026-02-16 13:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:36.752513189 +0000 UTC m=+1004.128861910" watchObservedRunningTime="2026-02-16 13:09:36.758947352 +0000 UTC m=+1004.135296073" Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.844275 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.863880 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d9rnm"] Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.890345 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2hxgr"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.106712 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196027 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196387 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196458 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvrs2\" (UniqueName: \"kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196516 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196610 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196638 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.209098 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2" (OuterVolumeSpecName: "kube-api-access-xvrs2") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "kube-api-access-xvrs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.227605 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.247529 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config" (OuterVolumeSpecName: "config") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.252850 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.269433 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.284841 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.302006 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.302035 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.302044 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.302052 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc 
kubenswrapper[4740]: I0216 13:09:37.302061 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvrs2\" (UniqueName: \"kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.302070 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.397339 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.471943 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.485313 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:09:37 crc kubenswrapper[4740]: E0216 13:09:37.485767 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ce9a30-45a5-40c6-a259-00a790928e07" containerName="init" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.485788 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ce9a30-45a5-40c6-a259-00a790928e07" containerName="init" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.486055 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ce9a30-45a5-40c6-a259-00a790928e07" containerName="init" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.487106 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511024 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511076 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511121 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511203 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cccxj\" (UniqueName: \"kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511288 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " 
pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511834 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613052 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613121 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613140 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613171 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613240 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cccxj\" (UniqueName: \"kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj\") pod \"horizon-58778bbcc-2dwkc\" (UID: 
\"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613727 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.614079 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.614938 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.618275 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.629072 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cccxj\" (UniqueName: \"kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.762700 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hxgr" event={"ID":"6e6806e6-e7ab-40bb-a703-0f4bfe131539","Type":"ContainerStarted","Data":"09ac2a81e51e0f54158edbd6cff4ecaee99212883c7807008289667a194afba6"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.767791 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d9rnm" event={"ID":"2fa1d954-018c-45a1-93e6-149318cdda8c","Type":"ContainerStarted","Data":"9b1b4bed7708a93e0a35b8ec7450ea513f432b2711fa0f1f62f6fa444b6ade25"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.772594 4740 generic.go:334] "Generic (PLEG): container finished" podID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerID="b1122b7363efb2c3f51a15a5779c7778b950ebde41c8ea4640584635fdf06e8f" exitCode=0 Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.772667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" event={"ID":"3cd0546c-4e67-40e3-93c1-1aee20e6df48","Type":"ContainerDied","Data":"b1122b7363efb2c3f51a15a5779c7778b950ebde41c8ea4640584635fdf06e8f"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.772693 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" event={"ID":"3cd0546c-4e67-40e3-93c1-1aee20e6df48","Type":"ContainerStarted","Data":"c97c29b3334866f32db650fc18c5be9637e2474fd2a6df30fb7505a093dc5ffb"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.783762 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" event={"ID":"a4ce9a30-45a5-40c6-a259-00a790928e07","Type":"ContainerDied","Data":"01c87342c96bf6d54c3f1ad0c1b57760034f7e0851745284f9ad2ed4dcf6b3ad"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.783774 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.784113 4740 scope.go:117] "RemoveContainer" containerID="872e1e092f6c8012cfe8884aa92b7bfd5c3ecdc7b66acc3a62c144ec783f7325" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.817448 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hclws" event={"ID":"2c41d146-de9f-4d90-bb9e-6c12fc832650","Type":"ContainerStarted","Data":"afb5050141aa0fbb8480e6bcc95d53720db77edafe16a89f557711467d506eaf"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.824821 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.926917 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.950034 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.951071 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hclws" podStartSLOduration=2.951049703 podStartE2EDuration="2.951049703s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:37.864706946 +0000 UTC m=+1005.241055677" watchObservedRunningTime="2026-02-16 13:09:37.951049703 +0000 UTC m=+1005.327398434" Feb 16 13:09:38 crc kubenswrapper[4740]: I0216 13:09:38.577271 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:09:38 crc kubenswrapper[4740]: I0216 13:09:38.830154 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" 
event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerStarted","Data":"ef863733ff531229caffac8488a7e74d8977b0c756b3b6496a0497807187e742"} Feb 16 13:09:38 crc kubenswrapper[4740]: I0216 13:09:38.839725 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg27" event={"ID":"f092c8c4-9a32-4093-9a5c-bc5fd05d600e","Type":"ContainerStarted","Data":"f1ee4fdc8a66d1ba0f722901509cbb10e2513b2d2385aba5e0bb1ae87766bf21"} Feb 16 13:09:38 crc kubenswrapper[4740]: I0216 13:09:38.865642 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7lg27" podStartSLOduration=4.142222816 podStartE2EDuration="35.865622421s" podCreationTimestamp="2026-02-16 13:09:03 +0000 UTC" firstStartedPulling="2026-02-16 13:09:05.112910171 +0000 UTC m=+972.489258892" lastFinishedPulling="2026-02-16 13:09:36.836309756 +0000 UTC m=+1004.212658497" observedRunningTime="2026-02-16 13:09:38.857682382 +0000 UTC m=+1006.234031113" watchObservedRunningTime="2026-02-16 13:09:38.865622421 +0000 UTC m=+1006.241971142" Feb 16 13:09:39 crc kubenswrapper[4740]: I0216 13:09:39.297502 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ce9a30-45a5-40c6-a259-00a790928e07" path="/var/lib/kubelet/pods/a4ce9a30-45a5-40c6-a259-00a790928e07/volumes" Feb 16 13:09:39 crc kubenswrapper[4740]: I0216 13:09:39.851219 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" event={"ID":"3cd0546c-4e67-40e3-93c1-1aee20e6df48","Type":"ContainerStarted","Data":"0bd5e46d963bd8f9a4c9b7c3012c62f1a37d6c130636480a19dc95c0f26d9e15"} Feb 16 13:09:40 crc kubenswrapper[4740]: I0216 13:09:40.861262 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:40 crc kubenswrapper[4740]: I0216 13:09:40.890487 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" 
podStartSLOduration=5.890467207 podStartE2EDuration="5.890467207s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:40.882168315 +0000 UTC m=+1008.258517036" watchObservedRunningTime="2026-02-16 13:09:40.890467207 +0000 UTC m=+1008.266815918" Feb 16 13:09:41 crc kubenswrapper[4740]: I0216 13:09:41.883387 4740 generic.go:334] "Generic (PLEG): container finished" podID="93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" containerID="b6158fa1fc9cea7906b55a39bb9178812e4aaa28f5f59307e4b0bc0142b18d51" exitCode=0 Feb 16 13:09:41 crc kubenswrapper[4740]: I0216 13:09:41.883462 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l5jrj" event={"ID":"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0","Type":"ContainerDied","Data":"b6158fa1fc9cea7906b55a39bb9178812e4aaa28f5f59307e4b0bc0142b18d51"} Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.145827 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ffb796745-6csq7"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.184848 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.187068 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.193397 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.204446 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.252336 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.268554 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.268606 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.268634 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.268962 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key\") 
pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.269122 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsswg\" (UniqueName: \"kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.269405 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.269436 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.278586 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56b9fd8c4d-crftf"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.286449 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.292501 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56b9fd8c4d-crftf"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.389109 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.389488 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-config-data\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.389848 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxbld\" (UniqueName: \"kubernetes.io/projected/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-kube-api-access-cxbld\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.389948 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.390085 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.390188 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.390416 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-combined-ca-bundle\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.390508 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.390887 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-secret-key\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.391055 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-tls-certs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.391163 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.391291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-scripts\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.391466 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsswg\" (UniqueName: \"kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.391578 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-logs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.392448 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data\") pod 
\"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.392956 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.393325 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.401053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.403895 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.406672 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc 
kubenswrapper[4740]: I0216 13:09:44.409092 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsswg\" (UniqueName: \"kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493691 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-secret-key\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493751 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-tls-certs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493786 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-scripts\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493830 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-logs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493887 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-config-data\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxbld\" (UniqueName: \"kubernetes.io/projected/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-kube-api-access-cxbld\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493946 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-combined-ca-bundle\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.494892 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-scripts\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.495245 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-logs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.496074 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-config-data\") pod 
\"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.497265 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-combined-ca-bundle\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.498012 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-secret-key\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.513063 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-tls-certs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.516263 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxbld\" (UniqueName: \"kubernetes.io/projected/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-kube-api-access-cxbld\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.522331 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.612310 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:45 crc kubenswrapper[4740]: I0216 13:09:45.986856 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:46 crc kubenswrapper[4740]: I0216 13:09:46.056035 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:09:46 crc kubenswrapper[4740]: I0216 13:09:46.056318 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" containerID="cri-o://9b0f87ae4e501b819b69299fe6b0bca555aa557876b95fd376763eb368141597" gracePeriod=10 Feb 16 13:09:46 crc kubenswrapper[4740]: I0216 13:09:46.922475 4740 generic.go:334] "Generic (PLEG): container finished" podID="56781f2b-b49d-4234-981b-a01a10dfab05" containerID="9b0f87ae4e501b819b69299fe6b0bca555aa557876b95fd376763eb368141597" exitCode=0 Feb 16 13:09:46 crc kubenswrapper[4740]: I0216 13:09:46.922564 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" event={"ID":"56781f2b-b49d-4234-981b-a01a10dfab05","Type":"ContainerDied","Data":"9b0f87ae4e501b819b69299fe6b0bca555aa557876b95fd376763eb368141597"} Feb 16 13:09:50 crc kubenswrapper[4740]: I0216 13:09:50.937083 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 16 13:09:51 crc kubenswrapper[4740]: I0216 13:09:51.982878 4740 generic.go:334] "Generic (PLEG): container finished" podID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" containerID="f1ee4fdc8a66d1ba0f722901509cbb10e2513b2d2385aba5e0bb1ae87766bf21" exitCode=0 Feb 16 13:09:51 crc kubenswrapper[4740]: I0216 13:09:51.982945 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg27" event={"ID":"f092c8c4-9a32-4093-9a5c-bc5fd05d600e","Type":"ContainerDied","Data":"f1ee4fdc8a66d1ba0f722901509cbb10e2513b2d2385aba5e0bb1ae87766bf21"} Feb 16 13:09:52 crc kubenswrapper[4740]: E0216 13:09:52.294641 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 16 13:09:52 crc kubenswrapper[4740]: E0216 13:09:52.294919 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbfh699h67dh578h87h7h565hcfhd9h577hchbfh55ch8dh5cfh686h567h7fh698h564h58fh544h576hc5hd7h688h5f6h594h5d5h57bh589h5cdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96l6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePull
Policy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-ffb796745-6csq7_openstack(1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:09:52 crc kubenswrapper[4740]: E0216 13:09:52.297299 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-ffb796745-6csq7" podUID="1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.392826 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535377 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535459 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7xlf\" (UniqueName: \"kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535597 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535649 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535682 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535722 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.541302 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf" (OuterVolumeSpecName: "kube-api-access-z7xlf") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "kube-api-access-z7xlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.542044 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts" (OuterVolumeSpecName: "scripts") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.542462 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.544038 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.564795 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data" (OuterVolumeSpecName: "config-data") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.568168 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638692 4740 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638730 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7xlf\" (UniqueName: \"kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638746 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638759 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 
13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638770 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638781 4740 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.992939 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l5jrj" event={"ID":"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0","Type":"ContainerDied","Data":"c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259"} Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.992976 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.992995 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.466989 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l5jrj"] Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.474196 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l5jrj"] Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.583671 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lxgpl"] Feb 16 13:09:53 crc kubenswrapper[4740]: E0216 13:09:53.584408 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" containerName="keystone-bootstrap" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.584439 4740 
state_mem.go:107] "Deleted CPUSet assignment" podUID="93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" containerName="keystone-bootstrap" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.584734 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" containerName="keystone-bootstrap" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.585641 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.589899 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.590156 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.590426 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.590614 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.590785 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nljh" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.610014 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lxgpl"] Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.657336 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.658079 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.658481 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.658664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.659262 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq56k\" (UniqueName: \"kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.659566 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761581 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761684 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761703 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761742 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761780 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq56k\" (UniqueName: 
\"kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.768677 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.769684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.771261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.773831 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.778163 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: 
\"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.781427 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq56k\" (UniqueName: \"kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.910703 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:55 crc kubenswrapper[4740]: I0216 13:09:55.293711 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" path="/var/lib/kubelet/pods/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0/volumes" Feb 16 13:09:55 crc kubenswrapper[4740]: I0216 13:09:55.937638 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 16 13:09:57 crc kubenswrapper[4740]: I0216 13:09:57.048616 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c41d146-de9f-4d90-bb9e-6c12fc832650" containerID="afb5050141aa0fbb8480e6bcc95d53720db77edafe16a89f557711467d506eaf" exitCode=0 Feb 16 13:09:57 crc kubenswrapper[4740]: I0216 13:09:57.048713 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hclws" event={"ID":"2c41d146-de9f-4d90-bb9e-6c12fc832650","Type":"ContainerDied","Data":"afb5050141aa0fbb8480e6bcc95d53720db77edafe16a89f557711467d506eaf"} Feb 16 13:10:00 crc kubenswrapper[4740]: E0216 13:10:00.128269 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 16 13:10:00 crc kubenswrapper[4740]: E0216 13:10:00.128683 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n557h55fh97h596h5b4hfh6dhb6h57dh555h9ch598h568h555h5d9h5f9h8dh59h5c9h566h5c4h5dfh5dbh66fh96h668h67ch9bh57ch5c6h68fh546q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ks2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e23974e9-800c-4295-8f84-89b4052280cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.217414 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hclws" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.218958 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.227155 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7lg27" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292246 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dr7b\" (UniqueName: \"kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b\") pod \"2c41d146-de9f-4d90-bb9e-6c12fc832650\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292310 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle\") pod \"2c41d146-de9f-4d90-bb9e-6c12fc832650\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292410 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkb5m\" (UniqueName: \"kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m\") pod \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292442 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle\") pod \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292534 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data\") pod \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292580 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts\") pod \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292612 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config\") pod \"2c41d146-de9f-4d90-bb9e-6c12fc832650\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292641 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key\") pod \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292686 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data\") pod \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292728 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96l6h\" (UniqueName: \"kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h\") pod \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292752 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs\") pod \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 
13:10:00.292793 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data\") pod \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.293361 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts" (OuterVolumeSpecName: "scripts") pod "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" (UID: "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.294447 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data" (OuterVolumeSpecName: "config-data") pod "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" (UID: "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.297220 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs" (OuterVolumeSpecName: "logs") pod "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" (UID: "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.298126 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m" (OuterVolumeSpecName: "kube-api-access-gkb5m") pod "f092c8c4-9a32-4093-9a5c-bc5fd05d600e" (UID: "f092c8c4-9a32-4093-9a5c-bc5fd05d600e"). InnerVolumeSpecName "kube-api-access-gkb5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.298240 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f092c8c4-9a32-4093-9a5c-bc5fd05d600e" (UID: "f092c8c4-9a32-4093-9a5c-bc5fd05d600e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.298364 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b" (OuterVolumeSpecName: "kube-api-access-9dr7b") pod "2c41d146-de9f-4d90-bb9e-6c12fc832650" (UID: "2c41d146-de9f-4d90-bb9e-6c12fc832650"). InnerVolumeSpecName "kube-api-access-9dr7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.299782 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h" (OuterVolumeSpecName: "kube-api-access-96l6h") pod "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" (UID: "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93"). InnerVolumeSpecName "kube-api-access-96l6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.301658 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" (UID: "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.318471 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c41d146-de9f-4d90-bb9e-6c12fc832650" (UID: "2c41d146-de9f-4d90-bb9e-6c12fc832650"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.323461 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config" (OuterVolumeSpecName: "config") pod "2c41d146-de9f-4d90-bb9e-6c12fc832650" (UID: "2c41d146-de9f-4d90-bb9e-6c12fc832650"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.335698 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f092c8c4-9a32-4093-9a5c-bc5fd05d600e" (UID: "f092c8c4-9a32-4093-9a5c-bc5fd05d600e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.355966 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data" (OuterVolumeSpecName: "config-data") pod "f092c8c4-9a32-4093-9a5c-bc5fd05d600e" (UID: "f092c8c4-9a32-4093-9a5c-bc5fd05d600e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394685 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394725 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dr7b\" (UniqueName: \"kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394737 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394747 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkb5m\" (UniqueName: \"kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394755 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394764 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394775 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 
13:10:00.394785 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394794 4740 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394836 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394847 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96l6h\" (UniqueName: \"kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394854 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.083025 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hclws" event={"ID":"2c41d146-de9f-4d90-bb9e-6c12fc832650","Type":"ContainerDied","Data":"1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d"} Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.083288 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.083047 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hclws" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.084078 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ffb796745-6csq7" event={"ID":"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93","Type":"ContainerDied","Data":"4de65a95aeb197276b59d3580d2d47362fd491a1ba955970f286211155c099b7"} Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.084199 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.089048 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg27" event={"ID":"f092c8c4-9a32-4093-9a5c-bc5fd05d600e","Type":"ContainerDied","Data":"b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199"} Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.089085 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.089097 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7lg27" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.151042 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ffb796745-6csq7"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.160597 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-ffb796745-6csq7"] Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.249411 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.249605 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,Recursive
ReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4fvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-2hxgr_openstack(6e6806e6-e7ab-40bb-a703-0f4bfe131539): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.251570 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2hxgr" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.299052 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" path="/var/lib/kubelet/pods/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93/volumes" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 
13:10:01.352040 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.518724 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-724rn\" (UniqueName: \"kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.518887 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.518929 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.518961 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.519009 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.519027 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.520933 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.521359 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" containerName="glance-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521372 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" containerName="glance-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.521385 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="init" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521391 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="init" Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.521404 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c41d146-de9f-4d90-bb9e-6c12fc832650" containerName="neutron-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521418 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c41d146-de9f-4d90-bb9e-6c12fc832650" containerName="neutron-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.521432 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521438 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" Feb 16 
13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521603 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521625 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c41d146-de9f-4d90-bb9e-6c12fc832650" containerName="neutron-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521634 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" containerName="glance-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.522564 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.566960 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn" (OuterVolumeSpecName: "kube-api-access-724rn") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). InnerVolumeSpecName "kube-api-access-724rn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.579274 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.623345 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.639620 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.640077 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.654219 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj5qd\" (UniqueName: \"kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.654383 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.654577 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.654737 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-724rn\" (UniqueName: \"kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.695091 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.697291 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.711151 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hg2gh" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.711386 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.711524 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.713559 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.717956 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758711 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj5qd\" (UniqueName: \"kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758760 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: 
\"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758797 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758872 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758911 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.759510 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.759929 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" 
Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.760172 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.760680 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.761230 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.851828 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj5qd\" (UniqueName: \"kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.883443 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.883776 
4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.883937 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cppt\" (UniqueName: \"kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.884035 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.884161 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.930097 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.968184 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.976469 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.977624 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config" (OuterVolumeSpecName: "config") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.985891 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.986000 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.986053 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 
13:10:01.986081 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.986113 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cppt\" (UniqueName: \"kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.986164 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.003179 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.047734 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.050749 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.050768 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.060909 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.065666 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cppt\" (UniqueName: \"kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.066544 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.070331 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090179 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090253 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090307 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090363 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090402 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090442 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpscq\" (UniqueName: \"kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090543 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090567 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.093709 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.098265 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.103531 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.105505 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" event={"ID":"56781f2b-b49d-4234-981b-a01a10dfab05","Type":"ContainerDied","Data":"07254c2d9a89687f122ec073a09001283bb6a4e93052f9c40534b8fe35f0fdbf"} Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.105649 4740 scope.go:117] "RemoveContainer" containerID="9b0f87ae4e501b819b69299fe6b0bca555aa557876b95fd376763eb368141597" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.105842 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.113086 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.129400 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dlcqm" event={"ID":"b63f4468-5c78-4dfd-a40a-302877eba3dc","Type":"ContainerStarted","Data":"1f6ef107bcaf336d76ceed01fca141680bec52be47d97dcef3c48566b0276aa5"} Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.156680 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerStarted","Data":"06bf25d33138128d15c50d580ea5273787a0565c881e9530d7786cb52837cf0e"} Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.165393 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-dlcqm" podStartSLOduration=2.6279626069999997 podStartE2EDuration="27.165371544s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="2026-02-16 13:09:36.652689619 +0000 UTC m=+1004.029038340" lastFinishedPulling="2026-02-16 13:10:01.190098556 +0000 UTC m=+1028.566447277" observedRunningTime="2026-02-16 13:10:02.160761189 +0000 UTC m=+1029.537109910" watchObservedRunningTime="2026-02-16 13:10:02.165371544 +0000 UTC m=+1029.541720265" Feb 16 13:10:02 crc kubenswrapper[4740]: E0216 13:10:02.167187 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-2hxgr" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.171490 4740 scope.go:117] "RemoveContainer" containerID="96de88cee058193affe71534965c618fbce9086a5fd824cc8ef53366e9b1cf91" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.174397 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.192264 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.193597 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.193763 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpscq\" (UniqueName: \"kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.193988 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.194087 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " 
pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.194213 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.194408 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.194426 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.196407 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.197668 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.204055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: 
\"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.204709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.205171 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.230481 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpscq\" (UniqueName: \"kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.319082 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lxgpl"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.431570 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56b9fd8c4d-crftf"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.483413 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.488691 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.508530 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.783079 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.970287 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.972217 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.978150 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.978362 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nblft" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.978622 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.986002 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.122092 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.129569 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135422 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j2fd\" (UniqueName: \"kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135488 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135525 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135551 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135603 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135628 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135649 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.136185 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.178217 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.193036 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.198877 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerStarted","Data":"126ea531f9d0f4e123ef2ed666501765655bc53f3a4c471202c3b01c12320210"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.198925 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" 
event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerStarted","Data":"217a126c559956ab646410696f08329f76a43596d698885dcc0d56ddc65d5b42"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.199038 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58778bbcc-2dwkc" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon-log" containerID="cri-o://217a126c559956ab646410696f08329f76a43596d698885dcc0d56ddc65d5b42" gracePeriod=30 Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.199634 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58778bbcc-2dwkc" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon" containerID="cri-o://126ea531f9d0f4e123ef2ed666501765655bc53f3a4c471202c3b01c12320210" gracePeriod=30 Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.212472 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d9rnm" event={"ID":"2fa1d954-018c-45a1-93e6-149318cdda8c","Type":"ContainerStarted","Data":"8842820f2cf4ddd2c9503055eef8bb5b04d701bcbb5f9d0e546ae5434e57491e"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.219852 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerStarted","Data":"6c46e84c5bdb3103f28f1cca092b9bf9c54cd61d61aebe4345cf80eca5a71fc3"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.219936 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerStarted","Data":"99f292203fe1ed3c399eda55f9bf0dc36d25dfeda0396d3a1124005fb27b7059"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.220076 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-dfc4b7997-nx6ww" 
podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon-log" containerID="cri-o://99f292203fe1ed3c399eda55f9bf0dc36d25dfeda0396d3a1124005fb27b7059" gracePeriod=30 Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.220335 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-dfc4b7997-nx6ww" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon" containerID="cri-o://6c46e84c5bdb3103f28f1cca092b9bf9c54cd61d61aebe4345cf80eca5a71fc3" gracePeriod=30 Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.234022 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerStarted","Data":"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.234068 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerStarted","Data":"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.236922 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z8mv\" (UniqueName: \"kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.239248 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" event={"ID":"9d3f4b10-353c-4963-96e9-c5e178df6c03","Type":"ContainerStarted","Data":"dd64b688e1590f3434f39d4272a1ad6a38ca58e66991330aef14f81fd719fdd1"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240300 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240374 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240398 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240424 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240500 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240567 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240604 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240627 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240646 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240667 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240705 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240765 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240898 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j2fd\" (UniqueName: \"kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.242342 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.242706 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.248780 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.253914 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.254836 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b9fd8c4d-crftf" event={"ID":"add1eb0e-dbfc-463a-b676-3e2e2b1f478d","Type":"ContainerStarted","Data":"09cd750e98b519f01b27e16cb426575348a5747cf9fd7e48e3599460511afb8a"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.254884 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b9fd8c4d-crftf" event={"ID":"add1eb0e-dbfc-463a-b676-3e2e2b1f478d","Type":"ContainerStarted","Data":"c5674c9e734a1860b42445df73308b8cae2af736059f1bf52d50e16bca7865c2"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.255150 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.257445 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: 
I0216 13:10:03.257529 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"] Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.263959 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lxgpl" event={"ID":"c1263236-13e5-4a79-b19a-96f535ae0783","Type":"ContainerStarted","Data":"a8954ab70eb71a11cadf0847488023deb3343352dcccef9132db389ddd167a80"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.264012 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lxgpl" event={"ID":"c1263236-13e5-4a79-b19a-96f535ae0783","Type":"ContainerStarted","Data":"ab97a3afcc64598d45c5a6410abb040f79933e08b088c704a4dcf8afb6ae8400"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.274521 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-58778bbcc-2dwkc" podStartSLOduration=3.601875096 podStartE2EDuration="26.274499695s" podCreationTimestamp="2026-02-16 13:09:37 +0000 UTC" firstStartedPulling="2026-02-16 13:09:38.60051582 +0000 UTC m=+1005.976864541" lastFinishedPulling="2026-02-16 13:10:01.273140409 +0000 UTC m=+1028.649489140" observedRunningTime="2026-02-16 13:10:03.224552123 +0000 UTC m=+1030.600900844" watchObservedRunningTime="2026-02-16 13:10:03.274499695 +0000 UTC m=+1030.650848416" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.297220 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-dfc4b7997-nx6ww" podStartSLOduration=4.767074478 podStartE2EDuration="28.297198219s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="2026-02-16 13:09:36.582869752 +0000 UTC m=+1003.959218473" lastFinishedPulling="2026-02-16 13:10:00.112993493 +0000 UTC m=+1027.489342214" observedRunningTime="2026-02-16 13:10:03.248510907 +0000 UTC m=+1030.624859638" watchObservedRunningTime="2026-02-16 13:10:03.297198219 +0000 UTC m=+1030.673546940" Feb 16 
13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.300658 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j2fd\" (UniqueName: \"kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.310619 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-d9rnm" podStartSLOduration=5.055232885 podStartE2EDuration="28.310602141s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="2026-02-16 13:09:36.848945494 +0000 UTC m=+1004.225294215" lastFinishedPulling="2026-02-16 13:10:00.10431475 +0000 UTC m=+1027.480663471" observedRunningTime="2026-02-16 13:10:03.269102435 +0000 UTC m=+1030.645451156" watchObservedRunningTime="2026-02-16 13:10:03.310602141 +0000 UTC m=+1030.686950862" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.319585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.321096 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lxgpl" podStartSLOduration=10.321008688 podStartE2EDuration="10.321008688s" podCreationTimestamp="2026-02-16 13:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:03.295376061 +0000 UTC m=+1030.671724812" watchObservedRunningTime="2026-02-16 13:10:03.321008688 +0000 UTC m=+1030.697357409" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 
13:10:03.330507 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5476559f6b-jvkbv" podStartSLOduration=19.330489287 podStartE2EDuration="19.330489287s" podCreationTimestamp="2026-02-16 13:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:03.32485955 +0000 UTC m=+1030.701208281" watchObservedRunningTime="2026-02-16 13:10:03.330489287 +0000 UTC m=+1030.706838008" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342374 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342470 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342499 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342551 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " 
pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342588 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342705 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z8mv\" (UniqueName: \"kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342764 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.343592 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.344563 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 
crc kubenswrapper[4740]: I0216 13:10:03.345365 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.356216 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.356505 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.364206 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.378961 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z8mv\" (UniqueName: \"kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.388000 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" path="/var/lib/kubelet/pods/56781f2b-b49d-4234-981b-a01a10dfab05/volumes" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.418725 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.501343 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.758249 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.271602 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" event={"ID":"e96a5e58-8096-4550-8a98-f47ad00622f8","Type":"ContainerStarted","Data":"c58ac0d9aec7a38c3ff27fbc2a9071447407d2f7a15c831455eefd159f4d45ac"} Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.274895 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b9fd8c4d-crftf" event={"ID":"add1eb0e-dbfc-463a-b676-3e2e2b1f478d","Type":"ContainerStarted","Data":"17ea669b2eb8ab74dc8b09119345d0a52be439c8e372e208e8672fcab2a13e40"} Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.276089 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerStarted","Data":"1506e84718fd32b51421d0c20379f7ab72db7c5398d1bae1271890a3cd491379"} Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.277387 4740 generic.go:334] "Generic (PLEG): container finished" podID="9d3f4b10-353c-4963-96e9-c5e178df6c03" 
containerID="68bfd2d35765947f76613629bb92c60e3bbb553e79f1b9b529dd9f256f7a1ccd" exitCode=0 Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.277483 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" event={"ID":"9d3f4b10-353c-4963-96e9-c5e178df6c03","Type":"ContainerDied","Data":"68bfd2d35765947f76613629bb92c60e3bbb553e79f1b9b529dd9f256f7a1ccd"} Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.302909 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-56b9fd8c4d-crftf" podStartSLOduration=20.302883854 podStartE2EDuration="20.302883854s" podCreationTimestamp="2026-02-16 13:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:04.296305548 +0000 UTC m=+1031.672654269" watchObservedRunningTime="2026-02-16 13:10:04.302883854 +0000 UTC m=+1031.679232575" Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.522983 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.523051 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:04.614482 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:04.614791 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.713787 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825583 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825648 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825729 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825787 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj5qd\" (UniqueName: \"kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825848 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825870 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.828082 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.839379 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd" (OuterVolumeSpecName: "kube-api-access-dj5qd") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "kube-api-access-dj5qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.869539 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.904433 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config" (OuterVolumeSpecName: "config") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.910333 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.911648 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.928409 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.928431 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.928442 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.928450 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0\") on node \"crc\" DevicePath 
\"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.928459 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj5qd\" (UniqueName: \"kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.937283 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.961200 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.029773 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.296367 4740 generic.go:334] "Generic (PLEG): container finished" podID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerID="e872b72de1e58161869a094bde76919910717e78ea91f7910e54161886b8bc03" exitCode=0 Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.296414 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" event={"ID":"e96a5e58-8096-4550-8a98-f47ad00622f8","Type":"ContainerDied","Data":"e872b72de1e58161869a094bde76919910717e78ea91f7910e54161886b8bc03"} Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.298090 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.298091 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" event={"ID":"9d3f4b10-353c-4963-96e9-c5e178df6c03","Type":"ContainerDied","Data":"dd64b688e1590f3434f39d4272a1ad6a38ca58e66991330aef14f81fd719fdd1"} Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.298218 4740 scope.go:117] "RemoveContainer" containerID="68bfd2d35765947f76613629bb92c60e3bbb553e79f1b9b529dd9f256f7a1ccd" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.313748 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerStarted","Data":"90b8c13105002e23bcbcbbfad6decf1c72cc959f43ea05e2caf8e9e59758b0a9"} Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.313797 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerStarted","Data":"1e87b39de0e646ce6ba449ee63a82579996c381dcaa6e162f4b5763dfb4523c5"} Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.314127 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.321824 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.329109 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerStarted","Data":"1688b3ca22762e9c4762a55031e5640c60db4ffb9816f6edaf523dcfadd2c0a7"} Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.424326 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:06 
crc kubenswrapper[4740]: I0216 13:10:06.452887 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.468516 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.478773 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84f8c7948d-wxf52" podStartSLOduration=5.478753041 podStartE2EDuration="5.478753041s" podCreationTimestamp="2026-02-16 13:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:06.453799696 +0000 UTC m=+1033.830148417" watchObservedRunningTime="2026-02-16 13:10:06.478753041 +0000 UTC m=+1033.855101762" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.953210 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:06 crc kubenswrapper[4740]: W0216 13:10:06.956131 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dd2ae56_b6b0_4b08_8dba_62e7b9f816e6.slice/crio-99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d WatchSource:0}: Error finding container 99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d: Status 404 returned error can't find the container with id 99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.354320 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3f4b10-353c-4963-96e9-c5e178df6c03" path="/var/lib/kubelet/pods/9d3f4b10-353c-4963-96e9-c5e178df6c03/volumes" Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.374149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" 
event={"ID":"e96a5e58-8096-4550-8a98-f47ad00622f8","Type":"ContainerStarted","Data":"d6be9db51cf7fad68c8cf132e91636d6d138c1b5511e518fd0f0ee9e6deb489c"} Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.374227 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.419423 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" podStartSLOduration=6.41940225 podStartE2EDuration="6.41940225s" podCreationTimestamp="2026-02-16 13:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:07.404392748 +0000 UTC m=+1034.780741469" watchObservedRunningTime="2026-02-16 13:10:07.41940225 +0000 UTC m=+1034.795750971" Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.456117 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerStarted","Data":"99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d"} Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.825560 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.056869 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.069798 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] Feb 16 13:10:08 crc kubenswrapper[4740]: E0216 13:10:08.070588 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3f4b10-353c-4963-96e9-c5e178df6c03" containerName="init" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.070614 4740 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9d3f4b10-353c-4963-96e9-c5e178df6c03" containerName="init" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.070984 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3f4b10-353c-4963-96e9-c5e178df6c03" containerName="init" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.072165 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.077961 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.078322 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.085421 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] Feb 16 13:10:08 crc kubenswrapper[4740]: W0216 13:10:08.107052 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44bcd77c_cccb_42d5_9cff_81c0c63bd919.slice/crio-fed99df6851ac1bca844877c035ebdaefb841cde18c63d2d741a412b55f6e913 WatchSource:0}: Error finding container fed99df6851ac1bca844877c035ebdaefb841cde18c63d2d741a412b55f6e913: Status 404 returned error can't find the container with id fed99df6851ac1bca844877c035ebdaefb841cde18c63d2d741a412b55f6e913 Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209500 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q69zx\" (UniqueName: \"kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209589 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209635 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209674 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209756 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209859 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209893 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312080 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312554 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312656 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312697 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312738 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q69zx\" (UniqueName: \"kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312795 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.317828 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.321564 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.322387 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle\") pod \"neutron-c64d8f89f-pfmqj\" (UID: 
\"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.323285 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.327299 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.337365 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.339508 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q69zx\" (UniqueName: \"kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.462027 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.471288 4740 generic.go:334] "Generic (PLEG): container finished" podID="c1263236-13e5-4a79-b19a-96f535ae0783" containerID="a8954ab70eb71a11cadf0847488023deb3343352dcccef9132db389ddd167a80" exitCode=0 Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.471367 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lxgpl" event={"ID":"c1263236-13e5-4a79-b19a-96f535ae0783","Type":"ContainerDied","Data":"a8954ab70eb71a11cadf0847488023deb3343352dcccef9132db389ddd167a80"} Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.480996 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerStarted","Data":"fed99df6851ac1bca844877c035ebdaefb841cde18c63d2d741a412b55f6e913"} Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.490242 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerStarted","Data":"3eccc44255e03c3377abc687e9c41e721ea09718010749b621343a7f92179705"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.257150 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] Feb 16 13:10:09 crc kubenswrapper[4740]: W0216 13:10:09.272869 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd15191d_cc73_4274_b185_d3572e5deac0.slice/crio-09e37df47c206f35c05b1284ea0f7acd624391787074b2cc755f95b355d26971 WatchSource:0}: Error finding container 09e37df47c206f35c05b1284ea0f7acd624391787074b2cc755f95b355d26971: Status 404 returned error can't find the container with id 09e37df47c206f35c05b1284ea0f7acd624391787074b2cc755f95b355d26971 Feb 16 13:10:09 crc 
kubenswrapper[4740]: I0216 13:10:09.527631 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerStarted","Data":"09e37df47c206f35c05b1284ea0f7acd624391787074b2cc755f95b355d26971"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.531214 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerStarted","Data":"cb17528b3e08a30ed049f7a29a8451584aa334f7cdf4b1ee6208a4ee1d2f66b6"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.541058 4740 generic.go:334] "Generic (PLEG): container finished" podID="2fa1d954-018c-45a1-93e6-149318cdda8c" containerID="8842820f2cf4ddd2c9503055eef8bb5b04d701bcbb5f9d0e546ae5434e57491e" exitCode=0 Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.541101 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d9rnm" event={"ID":"2fa1d954-018c-45a1-93e6-149318cdda8c","Type":"ContainerDied","Data":"8842820f2cf4ddd2c9503055eef8bb5b04d701bcbb5f9d0e546ae5434e57491e"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.543490 4740 generic.go:334] "Generic (PLEG): container finished" podID="b63f4468-5c78-4dfd-a40a-302877eba3dc" containerID="1f6ef107bcaf336d76ceed01fca141680bec52be47d97dcef3c48566b0276aa5" exitCode=0 Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.543549 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dlcqm" event={"ID":"b63f4468-5c78-4dfd-a40a-302877eba3dc","Type":"ContainerDied","Data":"1f6ef107bcaf336d76ceed01fca141680bec52be47d97dcef3c48566b0276aa5"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.547527 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-log" 
containerID="cri-o://3eccc44255e03c3377abc687e9c41e721ea09718010749b621343a7f92179705" gracePeriod=30 Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.547760 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerStarted","Data":"d6f9feb8edce8f2ec6ae24391658e5a12b683ef4cdb4b51bd4bc709071ae093a"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.547974 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-httpd" containerID="cri-o://d6f9feb8edce8f2ec6ae24391658e5a12b683ef4cdb4b51bd4bc709071ae093a" gracePeriod=30 Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.617057 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.616924331 podStartE2EDuration="8.616924331s" podCreationTimestamp="2026-02-16 13:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:09.615070773 +0000 UTC m=+1036.991419494" watchObservedRunningTime="2026-02-16 13:10:09.616924331 +0000 UTC m=+1036.993273052" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.381167 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414080 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414153 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414213 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq56k\" (UniqueName: \"kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414318 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414342 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414392 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.424136 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k" (OuterVolumeSpecName: "kube-api-access-qq56k") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "kube-api-access-qq56k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.430321 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts" (OuterVolumeSpecName: "scripts") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.456989 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.479226 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.517924 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.517964 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq56k\" (UniqueName: \"kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.517978 4740 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.517990 4740 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.567866 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerStarted","Data":"c410381d7374eff2277f42e0cd7ca5c44964d1eb3335c6251f192d7b0d2e3b6a"} Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.573056 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lxgpl" event={"ID":"c1263236-13e5-4a79-b19a-96f535ae0783","Type":"ContainerDied","Data":"ab97a3afcc64598d45c5a6410abb040f79933e08b088c704a4dcf8afb6ae8400"} Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.573356 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab97a3afcc64598d45c5a6410abb040f79933e08b088c704a4dcf8afb6ae8400" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 
13:10:10.573415 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576599 4740 generic.go:334] "Generic (PLEG): container finished" podID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerID="d6f9feb8edce8f2ec6ae24391658e5a12b683ef4cdb4b51bd4bc709071ae093a" exitCode=0 Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576634 4740 generic.go:334] "Generic (PLEG): container finished" podID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerID="3eccc44255e03c3377abc687e9c41e721ea09718010749b621343a7f92179705" exitCode=143 Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576796 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerDied","Data":"d6f9feb8edce8f2ec6ae24391658e5a12b683ef4cdb4b51bd4bc709071ae093a"} Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576902 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerDied","Data":"3eccc44255e03c3377abc687e9c41e721ea09718010749b621343a7f92179705"} Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576917 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerDied","Data":"99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d"} Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576927 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576996 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.598962 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data" (OuterVolumeSpecName: "config-data") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.619361 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.619389 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.630185 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.689659 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5cc7d69b6f-dmv77"] Feb 16 13:10:10 crc kubenswrapper[4740]: E0216 13:10:10.690299 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-log" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690320 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-log" Feb 16 13:10:10 crc kubenswrapper[4740]: E0216 13:10:10.690336 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-httpd" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690344 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-httpd" Feb 16 13:10:10 crc kubenswrapper[4740]: E0216 13:10:10.690380 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1263236-13e5-4a79-b19a-96f535ae0783" containerName="keystone-bootstrap" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690388 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1263236-13e5-4a79-b19a-96f535ae0783" containerName="keystone-bootstrap" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690598 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-log" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690611 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-httpd" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690629 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1263236-13e5-4a79-b19a-96f535ae0783" 
containerName="keystone-bootstrap" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.691368 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.694142 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.694834 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.736963 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5cc7d69b6f-dmv77"] Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822546 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822706 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822757 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822831 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822858 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822907 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822991 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j2fd\" (UniqueName: \"kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823272 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-fernet-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823333 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-internal-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 
13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823386 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-credential-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823412 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f7n5\" (UniqueName: \"kubernetes.io/projected/e68475b5-404f-48fc-a05a-ea18135e837c-kube-api-access-6f7n5\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823451 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-scripts\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823483 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-public-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823830 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-combined-ca-bundle\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " 
pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823892 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-config-data\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.825217 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.832969 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs" (OuterVolumeSpecName: "logs") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.836897 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.837112 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts" (OuterVolumeSpecName: "scripts") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.872339 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd" (OuterVolumeSpecName: "kube-api-access-9j2fd") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "kube-api-access-9j2fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.878165 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929379 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-config-data\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929462 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-fernet-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929497 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-internal-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929530 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-credential-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929547 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f7n5\" (UniqueName: \"kubernetes.io/projected/e68475b5-404f-48fc-a05a-ea18135e837c-kube-api-access-6f7n5\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc 
kubenswrapper[4740]: I0216 13:10:10.929570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-scripts\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929590 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-public-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929617 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-combined-ca-bundle\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929879 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929923 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929934 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929957 4740 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929967 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j2fd\" (UniqueName: \"kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.935146 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.942209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-credential-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.953357 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-public-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.958398 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-fernet-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.961650 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-config-data\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.966004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-combined-ca-bundle\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.967029 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-scripts\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.973191 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f7n5\" (UniqueName: \"kubernetes.io/projected/e68475b5-404f-48fc-a05a-ea18135e837c-kube-api-access-6f7n5\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.973369 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data" (OuterVolumeSpecName: "config-data") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.975342 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-internal-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:10.997681 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.037824 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.037859 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.047215 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.234761 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-d9rnm" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.240406 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts\") pod \"2fa1d954-018c-45a1-93e6-149318cdda8c\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.240602 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle\") pod \"2fa1d954-018c-45a1-93e6-149318cdda8c\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.240866 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs\") pod \"2fa1d954-018c-45a1-93e6-149318cdda8c\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.240997 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv6qp\" (UniqueName: \"kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp\") pod \"2fa1d954-018c-45a1-93e6-149318cdda8c\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.241048 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data\") pod \"2fa1d954-018c-45a1-93e6-149318cdda8c\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.242742 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs" (OuterVolumeSpecName: "logs") pod "2fa1d954-018c-45a1-93e6-149318cdda8c" (UID: "2fa1d954-018c-45a1-93e6-149318cdda8c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.245314 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.250863 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp" (OuterVolumeSpecName: "kube-api-access-cv6qp") pod "2fa1d954-018c-45a1-93e6-149318cdda8c" (UID: "2fa1d954-018c-45a1-93e6-149318cdda8c"). InnerVolumeSpecName "kube-api-access-cv6qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.250914 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts" (OuterVolumeSpecName: "scripts") pod "2fa1d954-018c-45a1-93e6-149318cdda8c" (UID: "2fa1d954-018c-45a1-93e6-149318cdda8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.268329 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fa1d954-018c-45a1-93e6-149318cdda8c" (UID: "2fa1d954-018c-45a1-93e6-149318cdda8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.313434 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data" (OuterVolumeSpecName: "config-data") pod "2fa1d954-018c-45a1-93e6-149318cdda8c" (UID: "2fa1d954-018c-45a1-93e6-149318cdda8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.346696 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle\") pod \"b63f4468-5c78-4dfd-a40a-302877eba3dc\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.346785 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data\") pod \"b63f4468-5c78-4dfd-a40a-302877eba3dc\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.346954 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd2x2\" (UniqueName: \"kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2\") pod \"b63f4468-5c78-4dfd-a40a-302877eba3dc\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.347428 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.347455 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.347465 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv6qp\" (UniqueName: \"kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.347474 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.347482 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.383111 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b63f4468-5c78-4dfd-a40a-302877eba3dc" (UID: "b63f4468-5c78-4dfd-a40a-302877eba3dc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.392159 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b63f4468-5c78-4dfd-a40a-302877eba3dc" (UID: "b63f4468-5c78-4dfd-a40a-302877eba3dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.394694 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2" (OuterVolumeSpecName: "kube-api-access-jd2x2") pod "b63f4468-5c78-4dfd-a40a-302877eba3dc" (UID: "b63f4468-5c78-4dfd-a40a-302877eba3dc"). InnerVolumeSpecName "kube-api-access-jd2x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.449727 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd2x2\" (UniqueName: \"kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.449772 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.449784 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.627886 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerStarted","Data":"82ace8655965ebda6188db4e8cdb8d5ed74c5ffb030764e27bbd3112a750363d"} Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.628086 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-log" 
containerID="cri-o://cb17528b3e08a30ed049f7a29a8451584aa334f7cdf4b1ee6208a4ee1d2f66b6" gracePeriod=30 Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.628599 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-httpd" containerID="cri-o://82ace8655965ebda6188db4e8cdb8d5ed74c5ffb030764e27bbd3112a750363d" gracePeriod=30 Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.634344 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d9rnm" event={"ID":"2fa1d954-018c-45a1-93e6-149318cdda8c","Type":"ContainerDied","Data":"9b1b4bed7708a93e0a35b8ec7450ea513f432b2711fa0f1f62f6fa444b6ade25"} Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.634390 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1b4bed7708a93e0a35b8ec7450ea513f432b2711fa0f1f62f6fa444b6ade25" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.634444 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d9rnm" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.638501 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dlcqm" event={"ID":"b63f4468-5c78-4dfd-a40a-302877eba3dc","Type":"ContainerDied","Data":"b7811f49189bed88b6538a8117cf752349ac762f9fdcabeb27482bf9fb8a222e"} Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.638555 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7811f49189bed88b6538a8117cf752349ac762f9fdcabeb27482bf9fb8a222e" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.638634 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.649901 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.653639 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerStarted","Data":"b8d8df7a4ac3106e08eff1e473c10fdc358e20ceadcee7139973a904e19f8b91"} Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.654030 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.693714 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.693695173 podStartE2EDuration="9.693695173s" podCreationTimestamp="2026-02-16 13:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:11.669110258 +0000 UTC m=+1039.045458979" watchObservedRunningTime="2026-02-16 13:10:11.693695173 +0000 UTC m=+1039.070043894" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.712239 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5cc7d69b6f-dmv77"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.749797 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.791074 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.797592 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:11 crc 
kubenswrapper[4740]: E0216 13:10:11.798231 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b63f4468-5c78-4dfd-a40a-302877eba3dc" containerName="barbican-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.798254 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63f4468-5c78-4dfd-a40a-302877eba3dc" containerName="barbican-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: E0216 13:10:11.798274 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa1d954-018c-45a1-93e6-149318cdda8c" containerName="placement-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.798280 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa1d954-018c-45a1-93e6-149318cdda8c" containerName="placement-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.798923 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa1d954-018c-45a1-93e6-149318cdda8c" containerName="placement-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.798948 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b63f4468-5c78-4dfd-a40a-302877eba3dc" containerName="barbican-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.802638 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.808282 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.809632 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.809793 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.820972 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c64d8f89f-pfmqj" podStartSLOduration=3.820954137 podStartE2EDuration="3.820954137s" podCreationTimestamp="2026-02-16 13:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:11.743451233 +0000 UTC m=+1039.119799964" watchObservedRunningTime="2026-02-16 13:10:11.820954137 +0000 UTC m=+1039.197302858" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.857635 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-758fd9dd8b-46z5m"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.905625 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.910528 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.914920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.914999 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.915072 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.915110 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.915580 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.915618 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25p4g\" (UniqueName: \"kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.947269 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.951149 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6f4698b555-qswqc"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.968451 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.968706 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-n77m5" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.977332 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.983580 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017691 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017756 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017787 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017834 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc 
kubenswrapper[4740]: I0216 13:10:12.017888 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017924 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25p4g\" (UniqueName: \"kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017992 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.018410 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.027358 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.028214 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs\") pod \"glance-default-external-api-0\" (UID: 
\"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.033004 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-758fd9dd8b-46z5m"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.039349 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.046921 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.047522 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.048428 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.103859 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f4698b555-qswqc"] Feb 16 13:10:12 crc kubenswrapper[4740]: 
I0216 13:10:12.108124 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.114200 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25p4g\" (UniqueName: \"kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.119188 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69d2v\" (UniqueName: \"kubernetes.io/projected/c3550143-6df6-42d0-b18a-8b6275eac907-kube-api-access-69d2v\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.119230 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3550143-6df6-42d0-b18a-8b6275eac907-logs\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.119260 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r85g5\" (UniqueName: \"kubernetes.io/projected/30b251e5-1979-41ad-ad86-efebb5e6a240-kube-api-access-r85g5\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" 
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.119286 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-combined-ca-bundle\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.119321 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data-custom\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.120277 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-combined-ca-bundle\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.120382 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.120446 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data-custom\") pod 
\"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.120609 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.120639 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30b251e5-1979-41ad-ad86-efebb5e6a240-logs\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.155632 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.177932 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-758758df44-4g6db"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.181216 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.184875 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.188303 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.188494 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.188693 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.188806 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.189093 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bjvkq" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.190154 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-758758df44-4g6db"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222071 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69d2v\" (UniqueName: \"kubernetes.io/projected/c3550143-6df6-42d0-b18a-8b6275eac907-kube-api-access-69d2v\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222318 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3550143-6df6-42d0-b18a-8b6275eac907-logs\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222429 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-r85g5\" (UniqueName: \"kubernetes.io/projected/30b251e5-1979-41ad-ad86-efebb5e6a240-kube-api-access-r85g5\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222534 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-combined-ca-bundle\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222635 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data-custom\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222734 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-combined-ca-bundle\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222910 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 
13:10:12.223015 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data-custom\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.223188 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.223289 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30b251e5-1979-41ad-ad86-efebb5e6a240-logs\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.223838 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3550143-6df6-42d0-b18a-8b6275eac907-logs\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.226207 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30b251e5-1979-41ad-ad86-efebb5e6a240-logs\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.227970 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-combined-ca-bundle\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.232444 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.238229 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-combined-ca-bundle\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.240279 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.244687 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data-custom\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 
13:10:12.259020 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data-custom\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.259494 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r85g5\" (UniqueName: \"kubernetes.io/projected/30b251e5-1979-41ad-ad86-efebb5e6a240-kube-api-access-r85g5\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.259682 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69d2v\" (UniqueName: \"kubernetes.io/projected/c3550143-6df6-42d0-b18a-8b6275eac907-kube-api-access-69d2v\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.274021 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.274290 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="dnsmasq-dns" containerID="cri-o://d6be9db51cf7fad68c8cf132e91636d6d138c1b5511e518fd0f0ee9e6deb489c" gracePeriod=10 Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.278001 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.305469 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.307043 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324634 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/983c874c-3b25-49df-82cb-b3dfaf1db7ac-logs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324761 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9t5x\" (UniqueName: \"kubernetes.io/projected/983c874c-3b25-49df-82cb-b3dfaf1db7ac-kube-api-access-d9t5x\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324790 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-internal-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-public-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324968 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-config-data\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324988 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-combined-ca-bundle\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.325047 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-scripts\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.339733 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.349644 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.351960 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.355716 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.378745 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.383466 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.414613 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426243 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426286 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426320 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426355 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426380 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-config-data\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426400 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-combined-ca-bundle\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426454 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426476 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426494 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426518 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-scripts\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426541 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bdlr\" (UniqueName: \"kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426562 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/983c874c-3b25-49df-82cb-b3dfaf1db7ac-logs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426629 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426651 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426682 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8h97\" (UniqueName: \"kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426732 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9t5x\" (UniqueName: \"kubernetes.io/projected/983c874c-3b25-49df-82cb-b3dfaf1db7ac-kube-api-access-d9t5x\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426763 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-internal-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426793 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-public-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.427589 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/983c874c-3b25-49df-82cb-b3dfaf1db7ac-logs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.435692 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-config-data\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.438273 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-public-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.439542 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-combined-ca-bundle\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.440923 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-internal-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.454681 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9t5x\" (UniqueName: \"kubernetes.io/projected/983c874c-3b25-49df-82cb-b3dfaf1db7ac-kube-api-access-d9t5x\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.457224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-scripts\") 
pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.489678 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.503468 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528153 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528206 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528295 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528321 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528357 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528390 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bdlr\" (UniqueName: \"kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528468 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528509 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8h97\" (UniqueName: 
\"kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528604 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528633 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.530898 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.531708 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.533308 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config\") pod 
\"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.533993 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.535891 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.536436 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.545623 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.547470 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " 
pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.552150 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.556523 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8h97\" (UniqueName: \"kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.566626 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bdlr\" (UniqueName: \"kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.675620 4740 generic.go:334] "Generic (PLEG): container finished" podID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerID="d6be9db51cf7fad68c8cf132e91636d6d138c1b5511e518fd0f0ee9e6deb489c" exitCode=0 Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.675683 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" event={"ID":"e96a5e58-8096-4550-8a98-f47ad00622f8","Type":"ContainerDied","Data":"d6be9db51cf7fad68c8cf132e91636d6d138c1b5511e518fd0f0ee9e6deb489c"} Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.677137 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5cc7d69b6f-dmv77" 
event={"ID":"e68475b5-404f-48fc-a05a-ea18135e837c","Type":"ContainerStarted","Data":"50ee66e2092c893cf5f0f856cd7d76e6d102c3e7ac4bf3cc2a539e3dd9eb076b"} Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.677157 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5cc7d69b6f-dmv77" event={"ID":"e68475b5-404f-48fc-a05a-ea18135e837c","Type":"ContainerStarted","Data":"d6113a5f2c1f4af5d6b255160f0631ac5bd8d5b93cb3bf016b4358612678d41a"} Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.678281 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.691589 4740 generic.go:334] "Generic (PLEG): container finished" podID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerID="82ace8655965ebda6188db4e8cdb8d5ed74c5ffb030764e27bbd3112a750363d" exitCode=0 Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.691622 4740 generic.go:334] "Generic (PLEG): container finished" podID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerID="cb17528b3e08a30ed049f7a29a8451584aa334f7cdf4b1ee6208a4ee1d2f66b6" exitCode=143 Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.692383 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerDied","Data":"82ace8655965ebda6188db4e8cdb8d5ed74c5ffb030764e27bbd3112a750363d"} Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.692442 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerDied","Data":"cb17528b3e08a30ed049f7a29a8451584aa334f7cdf4b1ee6208a4ee1d2f66b6"} Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.825427 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.838463 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.949798 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5cc7d69b6f-dmv77" podStartSLOduration=2.9497712849999997 podStartE2EDuration="2.949771285s" podCreationTimestamp="2026-02-16 13:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:12.700311308 +0000 UTC m=+1040.076660029" watchObservedRunningTime="2026-02-16 13:10:12.949771285 +0000 UTC m=+1040.326120006" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.973441 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.087901 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-758fd9dd8b-46z5m"] Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.092247 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.202073 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.202578 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.202624 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.202735 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpscq\" (UniqueName: \"kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.202911 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.203004 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.231831 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq" (OuterVolumeSpecName: "kube-api-access-gpscq") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "kube-api-access-gpscq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.344682 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpscq\" (UniqueName: \"kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.348456 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.398733 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.398285 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.421720 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" path="/var/lib/kubelet/pods/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6/volumes" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.443043 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.445787 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.445823 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.445833 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.468633 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config" (OuterVolumeSpecName: "config") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.489728 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546527 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546672 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546710 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546758 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546878 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7z8mv\" (UniqueName: \"kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546920 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546965 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.547479 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.547499 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.547945 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs" (OuterVolumeSpecName: "logs") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.548192 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.578017 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts" (OuterVolumeSpecName: "scripts") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.611018 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.628027 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv" (OuterVolumeSpecName: "kube-api-access-7z8mv") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "kube-api-access-7z8mv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.650669 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z8mv\" (UniqueName: \"kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.650697 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.650708 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.650717 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.650746 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.692999 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f4698b555-qswqc"] Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.753714 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data" (OuterVolumeSpecName: "config-data") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.756208 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.762114 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-758758df44-4g6db"] Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.793793 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 16 13:10:13 crc kubenswrapper[4740]: W0216 13:10:13.804174 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod983c874c_3b25_49df_82cb_b3dfaf1db7ac.slice/crio-4d9eedc4d1d15201d773421cd11829d32e90f67816a00e8af2621718ce17f97e WatchSource:0}: Error finding container 4d9eedc4d1d15201d773421cd11829d32e90f67816a00e8af2621718ce17f97e: Status 404 returned error can't find the container with id 4d9eedc4d1d15201d773421cd11829d32e90f67816a00e8af2621718ce17f97e Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.823951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" event={"ID":"30b251e5-1979-41ad-ad86-efebb5e6a240","Type":"ContainerStarted","Data":"05535fb9b406eba46f389f3cb501f252adf4d6c36395ee05c09e7c3dc0a2cc74"} Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.841587 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f4698b555-qswqc" 
event={"ID":"c3550143-6df6-42d0-b18a-8b6275eac907","Type":"ContainerStarted","Data":"7fc52501a0de9e1866cab48c6e4c1a1ccfd8612bb6a0ab877e598a0179812bdf"} Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.853661 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" event={"ID":"e96a5e58-8096-4550-8a98-f47ad00622f8","Type":"ContainerDied","Data":"c58ac0d9aec7a38c3ff27fbc2a9071447407d2f7a15c831455eefd159f4d45ac"} Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.854206 4740 scope.go:117] "RemoveContainer" containerID="d6be9db51cf7fad68c8cf132e91636d6d138c1b5511e518fd0f0ee9e6deb489c" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.854512 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.857727 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.857765 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.857777 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.957648 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"] Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.976614 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerStarted","Data":"5b0e438309976f20e6cf23ad9e1052b831e4bb9fad154e2636f7ef4afee681a4"} Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.999064 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"] Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.056021 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.057965 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerDied","Data":"fed99df6851ac1bca844877c035ebdaefb841cde18c63d2d741a412b55f6e913"} Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.085454 4740 scope.go:117] "RemoveContainer" containerID="e872b72de1e58161869a094bde76919910717e78ea91f7910e54161886b8bc03" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.155509 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"] Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.196892 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.224338 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.236002 4740 scope.go:117] "RemoveContainer" containerID="82ace8655965ebda6188db4e8cdb8d5ed74c5ffb030764e27bbd3112a750363d" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.240474 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:14 crc kubenswrapper[4740]: E0216 13:10:14.240937 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="init" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.240964 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="init" Feb 16 13:10:14 crc kubenswrapper[4740]: E0216 13:10:14.240994 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-httpd" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241004 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-httpd" Feb 16 13:10:14 crc kubenswrapper[4740]: E0216 13:10:14.241024 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="dnsmasq-dns" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241032 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="dnsmasq-dns" Feb 16 13:10:14 crc kubenswrapper[4740]: E0216 13:10:14.241046 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-log" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241052 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-log" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241244 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-httpd" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241282 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="dnsmasq-dns" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241304 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" 
containerName="glance-log" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.243766 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.253474 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.257797 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.296261 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:14 crc kubenswrapper[4740]: W0216 13:10:14.314115 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fd3ee_2093_4adb_a7af_23d05c718429.slice/crio-ee39dbad8b099c7103531639b41565947f88a2ed618018750ab7657614220f4a WatchSource:0}: Error finding container ee39dbad8b099c7103531639b41565947f88a2ed618018750ab7657614220f4a: Status 404 returned error can't find the container with id ee39dbad8b099c7103531639b41565947f88a2ed618018750ab7657614220f4a Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.320323 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"] Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.381859 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.381956 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382031 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8spl\" (UniqueName: \"kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382087 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382150 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382190 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382311 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.398962 4740 scope.go:117] "RemoveContainer" containerID="cb17528b3e08a30ed049f7a29a8451584aa334f7cdf4b1ee6208a4ee1d2f66b6" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483783 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483850 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483892 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8spl\" (UniqueName: \"kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl\") pod \"glance-default-internal-api-0\" (UID: 
\"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483924 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483945 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483966 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483991 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.484042 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 
16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.484410 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.491779 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.492264 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.501583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.503842 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.503853 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.505292 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.524457 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8spl\" (UniqueName: \"kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.527200 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.528843 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.626974 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56b9fd8c4d-crftf" 
podUID="add1eb0e-dbfc-463a-b676-3e2e2b1f478d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.667616 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.092626 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerStarted","Data":"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.105375 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-758758df44-4g6db" event={"ID":"983c874c-3b25-49df-82cb-b3dfaf1db7ac","Type":"ContainerStarted","Data":"8dca7c5d645d9c1b3cb27e141991a881de7063a8cfe75c04fc3f91921ff7f1b9"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.105451 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-758758df44-4g6db" event={"ID":"983c874c-3b25-49df-82cb-b3dfaf1db7ac","Type":"ContainerStarted","Data":"90595916ba48eea6e67347b7e520ad15b0e0e798381a72b531b7c7fe3509ca4a"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.105463 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-758758df44-4g6db" event={"ID":"983c874c-3b25-49df-82cb-b3dfaf1db7ac","Type":"ContainerStarted","Data":"4d9eedc4d1d15201d773421cd11829d32e90f67816a00e8af2621718ce17f97e"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.105779 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.105841 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.136114 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-758758df44-4g6db" podStartSLOduration=4.136094622 podStartE2EDuration="4.136094622s" podCreationTimestamp="2026-02-16 13:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:15.130796145 +0000 UTC m=+1042.507144866" watchObservedRunningTime="2026-02-16 13:10:15.136094622 +0000 UTC m=+1042.512443343" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.168329 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerStarted","Data":"fcdcb5b22e8cc76adbbe1f757da3d2bf7c1e93bb52cf022c637fac80081d1283"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.168379 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerStarted","Data":"302a4fe2d789df7c6696f0d0599ddcf1f3c215a4ff9751a1692adaad334ff853"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.176429 4740 generic.go:334] "Generic (PLEG): container finished" podID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerID="2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7" exitCode=0 Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.176573 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" event={"ID":"bc2fd3ee-2093-4adb-a7af-23d05c718429","Type":"ContainerDied","Data":"2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.176605 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" 
event={"ID":"bc2fd3ee-2093-4adb-a7af-23d05c718429","Type":"ContainerStarted","Data":"ee39dbad8b099c7103531639b41565947f88a2ed618018750ab7657614220f4a"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.299614 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" path="/var/lib/kubelet/pods/44bcd77c-cccb-42d5-9cff-81c0c63bd919/volumes" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.301712 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" path="/var/lib/kubelet/pods/e96a5e58-8096-4550-8a98-f47ad00622f8/volumes" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.460659 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.861695 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5fbb5f795d-phd88"] Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.871603 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.874764 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.875279 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.903648 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fbb5f795d-phd88"] Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.935851 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data-custom\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.935953 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-internal-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.935992 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/793c4693-2327-492b-9798-18501804cdf3-logs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.936011 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.936071 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cjbn\" (UniqueName: \"kubernetes.io/projected/793c4693-2327-492b-9798-18501804cdf3-kube-api-access-7cjbn\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.936100 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-combined-ca-bundle\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.936134 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-public-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038044 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-internal-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038096 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/793c4693-2327-492b-9798-18501804cdf3-logs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038116 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038170 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cjbn\" (UniqueName: \"kubernetes.io/projected/793c4693-2327-492b-9798-18501804cdf3-kube-api-access-7cjbn\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-combined-ca-bundle\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038216 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-public-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038250 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data-custom\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.039573 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/793c4693-2327-492b-9798-18501804cdf3-logs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.053174 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-public-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.058538 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-combined-ca-bundle\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.060552 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cjbn\" (UniqueName: \"kubernetes.io/projected/793c4693-2327-492b-9798-18501804cdf3-kube-api-access-7cjbn\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.061709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data\") pod 
\"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.066156 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data-custom\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.068639 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-internal-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.196536 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.213599 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerStarted","Data":"6f1ddfc5faccbe8680ed49109bf97fb17cef2193d7564f21065e43146dcd96d0"} Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.213865 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.213974 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.250337 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58b5794cfd-4trjb" podStartSLOduration=4.25031117 podStartE2EDuration="4.25031117s" podCreationTimestamp="2026-02-16 13:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:16.239570011 +0000 UTC m=+1043.615918732" watchObservedRunningTime="2026-02-16 13:10:16.25031117 +0000 UTC m=+1043.626659891" Feb 16 13:10:19 crc kubenswrapper[4740]: I0216 13:10:19.249402 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerStarted","Data":"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b"} Feb 16 13:10:19 crc kubenswrapper[4740]: I0216 13:10:19.283952 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.283934497 podStartE2EDuration="8.283934497s" podCreationTimestamp="2026-02-16 13:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:19.275216771 +0000 UTC m=+1046.651565492" watchObservedRunningTime="2026-02-16 13:10:19.283934497 +0000 UTC m=+1046.660283218" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.186017 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.186479 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.226123 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.229998 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.280092 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerStarted","Data":"b1f805b3f42130f9ec256249b24cf294052db79347125cb78ef7bd761396a42c"} Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.280265 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.280899 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.140659 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fbb5f795d-phd88"] Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.328692 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" 
event={"ID":"30b251e5-1979-41ad-ad86-efebb5e6a240","Type":"ContainerStarted","Data":"d1b9cf7963a3448627aa82cb504e23edd58f758f49f5b3149dec36a2e9172b3c"} Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.368921 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" event={"ID":"bc2fd3ee-2093-4adb-a7af-23d05c718429","Type":"ContainerStarted","Data":"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd"} Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.369243 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.382473 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerStarted","Data":"87e47563591dddda2edb2ddfefa2d9a009d1939cf33ba162cc9aca53305f604c"} Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.407168 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f4698b555-qswqc" event={"ID":"c3550143-6df6-42d0-b18a-8b6275eac907","Type":"ContainerStarted","Data":"53e561ac2bf980b00527b2cc4de4e2527253fabffa39158574ca33d75f33b933"} Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.416301 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbb5f795d-phd88" event={"ID":"793c4693-2327-492b-9798-18501804cdf3","Type":"ContainerStarted","Data":"620fecfe559ab2f4b7e5556af99d2ed15a184002045d049dbcc46792310d32af"} Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.429334 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" podStartSLOduration=11.429317663 podStartE2EDuration="11.429317663s" podCreationTimestamp="2026-02-16 13:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 13:10:23.423341976 +0000 UTC m=+1050.799690697" watchObservedRunningTime="2026-02-16 13:10:23.429317663 +0000 UTC m=+1050.805666384" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.429690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerStarted","Data":"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.438725 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hxgr" event={"ID":"6e6806e6-e7ab-40bb-a703-0f4bfe131539","Type":"ContainerStarted","Data":"451d00cb28f2393a2b488c082f43cfde6cbce5c2b86f4a3ccf6583823523e02b"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.447040 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbb5f795d-phd88" event={"ID":"793c4693-2327-492b-9798-18501804cdf3","Type":"ContainerStarted","Data":"22edea436a040f2420273a2cfd8a123be2bfeac22a49bc2827fbdb65ebe6bcb5"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.447088 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbb5f795d-phd88" event={"ID":"793c4693-2327-492b-9798-18501804cdf3","Type":"ContainerStarted","Data":"f0b4ea8abea1e9b7cb7f3c01467bf7d9c9be51328475c7abb750d294d2bee487"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.447913 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.447944 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.450848 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" 
event={"ID":"30b251e5-1979-41ad-ad86-efebb5e6a240","Type":"ContainerStarted","Data":"a159fd47020e98397ec5d8f3691a344afdc0cf0e030443a1f96e58d082c9f6c2"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.455684 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2hxgr" podStartSLOduration=3.616129973 podStartE2EDuration="49.45566975s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="2026-02-16 13:09:36.860492678 +0000 UTC m=+1004.236841399" lastFinishedPulling="2026-02-16 13:10:22.700032455 +0000 UTC m=+1050.076381176" observedRunningTime="2026-02-16 13:10:24.453404549 +0000 UTC m=+1051.829753270" watchObservedRunningTime="2026-02-16 13:10:24.45566975 +0000 UTC m=+1051.832018471" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.473398 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f4698b555-qswqc" event={"ID":"c3550143-6df6-42d0-b18a-8b6275eac907","Type":"ContainerStarted","Data":"65c226b62f8f7960e09cabedf8ac39b2be809b6226aed6e9b63ac95c8c0b01e7"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.482959 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" podStartSLOduration=3.934197555 podStartE2EDuration="13.48294164s" podCreationTimestamp="2026-02-16 13:10:11 +0000 UTC" firstStartedPulling="2026-02-16 13:10:13.081425897 +0000 UTC m=+1040.457774618" lastFinishedPulling="2026-02-16 13:10:22.630169982 +0000 UTC m=+1050.006518703" observedRunningTime="2026-02-16 13:10:24.476165416 +0000 UTC m=+1051.852514137" watchObservedRunningTime="2026-02-16 13:10:24.48294164 +0000 UTC m=+1051.859290361" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.524254 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.538194 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6f4698b555-qswqc" podStartSLOduration=4.544684308 podStartE2EDuration="13.538174962s" podCreationTimestamp="2026-02-16 13:10:11 +0000 UTC" firstStartedPulling="2026-02-16 13:10:13.706007474 +0000 UTC m=+1041.082356195" lastFinishedPulling="2026-02-16 13:10:22.699498128 +0000 UTC m=+1050.075846849" observedRunningTime="2026-02-16 13:10:24.529133387 +0000 UTC m=+1051.905482108" watchObservedRunningTime="2026-02-16 13:10:24.538174962 +0000 UTC m=+1051.914523683" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.541501 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5fbb5f795d-phd88" podStartSLOduration=9.541487326 podStartE2EDuration="9.541487326s" podCreationTimestamp="2026-02-16 13:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:24.508700913 +0000 UTC m=+1051.885049644" watchObservedRunningTime="2026-02-16 13:10:24.541487326 +0000 UTC m=+1051.917836047" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.613000 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56b9fd8c4d-crftf" podUID="add1eb0e-dbfc-463a-b676-3e2e2b1f478d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 16 13:10:25 crc kubenswrapper[4740]: I0216 13:10:25.496251 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerStarted","Data":"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423"} Feb 16 13:10:25 crc kubenswrapper[4740]: I0216 13:10:25.542094 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.542077151 podStartE2EDuration="11.542077151s" podCreationTimestamp="2026-02-16 13:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:25.532139087 +0000 UTC m=+1052.908487808" watchObservedRunningTime="2026-02-16 13:10:25.542077151 +0000 UTC m=+1052.918425872" Feb 16 13:10:25 crc kubenswrapper[4740]: I0216 13:10:25.872224 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:26 crc kubenswrapper[4740]: I0216 13:10:26.099479 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:26 crc kubenswrapper[4740]: I0216 13:10:26.137500 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:10:27 crc kubenswrapper[4740]: I0216 13:10:27.371901 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:10:29 crc kubenswrapper[4740]: I0216 13:10:29.547494 4740 generic.go:334] "Generic (PLEG): container finished" podID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" containerID="451d00cb28f2393a2b488c082f43cfde6cbce5c2b86f4a3ccf6583823523e02b" exitCode=0 Feb 16 13:10:29 crc kubenswrapper[4740]: I0216 13:10:29.547580 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hxgr" 
event={"ID":"6e6806e6-e7ab-40bb-a703-0f4bfe131539","Type":"ContainerDied","Data":"451d00cb28f2393a2b488c082f43cfde6cbce5c2b86f4a3ccf6583823523e02b"} Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.927502 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989312 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989367 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989471 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989529 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4fvz\" (UniqueName: \"kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989580 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989599 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.990719 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.010502 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz" (OuterVolumeSpecName: "kube-api-access-c4fvz") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "kube-api-access-c4fvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.010570 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts" (OuterVolumeSpecName: "scripts") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.012962 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.021733 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.043972 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data" (OuterVolumeSpecName: "config-data") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091126 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091169 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091183 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091192 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4fvz\" (UniqueName: \"kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091201 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091209 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.196269 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.437455 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] 
Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.437754 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c64d8f89f-pfmqj" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-api" containerID="cri-o://c410381d7374eff2277f42e0cd7ca5c44964d1eb3335c6251f192d7b0d2e3b6a" gracePeriod=30 Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.438319 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c64d8f89f-pfmqj" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-httpd" containerID="cri-o://b8d8df7a4ac3106e08eff1e473c10fdc358e20ceadcee7139973a904e19f8b91" gracePeriod=30 Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.461607 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-c64d8f89f-pfmqj" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9696/\": EOF" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.478622 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7d8c67b945-9qhdf"] Feb 16 13:10:32 crc kubenswrapper[4740]: E0216 13:10:32.483022 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" containerName="cinder-db-sync" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.483059 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" containerName="cinder-db-sync" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.483295 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" containerName="cinder-db-sync" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.484569 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.489277 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d8c67b945-9qhdf"] Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.580901 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hxgr" event={"ID":"6e6806e6-e7ab-40bb-a703-0f4bfe131539","Type":"ContainerDied","Data":"09ac2a81e51e0f54158edbd6cff4ecaee99212883c7807008289667a194afba6"} Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.580954 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09ac2a81e51e0f54158edbd6cff4ecaee99212883c7807008289667a194afba6" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.581545 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.605772 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-internal-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.605957 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-ovndb-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.606086 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-combined-ca-bundle\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.606152 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6sp8\" (UniqueName: \"kubernetes.io/projected/2d2e1871-02f7-4ff9-9987-054bf39f4418-kube-api-access-h6sp8\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.606301 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-config\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.606576 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-httpd-config\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.607123 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-public-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.710847 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-public-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.710952 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-internal-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.711002 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-ovndb-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.711054 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-combined-ca-bundle\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.711080 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6sp8\" (UniqueName: \"kubernetes.io/projected/2d2e1871-02f7-4ff9-9987-054bf39f4418-kube-api-access-h6sp8\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.711126 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-config\") pod 
\"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.711229 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-httpd-config\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.716190 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-ovndb-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.716902 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-combined-ca-bundle\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.717091 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-httpd-config\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.717307 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-internal-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 
13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.717315 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-config\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.721360 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-public-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.732961 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6sp8\" (UniqueName: \"kubernetes.io/projected/2d2e1871-02f7-4ff9-9987-054bf39f4418-kube-api-access-h6sp8\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.802545 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.828042 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.847168 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.908283 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.908544 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="dnsmasq-dns" containerID="cri-o://0bd5e46d963bd8f9a4c9b7c3012c62f1a37d6c130636480a19dc95c0f26d9e15" gracePeriod=10 Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.974727 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.044517 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"] Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.045322 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58b5794cfd-4trjb" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api" containerID="cri-o://6f1ddfc5faccbe8680ed49109bf97fb17cef2193d7564f21065e43146dcd96d0" gracePeriod=30 Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.044984 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58b5794cfd-4trjb" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api-log" 
containerID="cri-o://fcdcb5b22e8cc76adbbe1f757da3d2bf7c1e93bb52cf022c637fac80081d1283" gracePeriod=30
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.250889 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.252835 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.254487 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.262048 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.262096 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mtx8t"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.262559 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.262726 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323323 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323362 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323385 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf65h\" (UniqueName: \"kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323423 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323448 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323489 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.413756 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"]
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.418154 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"]
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.418285 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.426910 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.426969 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.427005 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf65h\" (UniqueName: \"kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.427063 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.427104 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.427163 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.430227 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.436129 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.437884 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.451409 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.463485 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.465102 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf65h\" (UniqueName: \"kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.491861 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.493311 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.497655 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.501866 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529044 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529175 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529252 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529344 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cllbp\" (UniqueName: \"kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529413 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529522 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529594 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529691 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529765 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529852 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529932 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529995 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.530059 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqbvx\" (UniqueName: \"kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.545418 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.620988 4740 generic.go:334] "Generic (PLEG): container finished" podID="40882b0a-c73f-4936-83f9-8bef1774c356" containerID="fcdcb5b22e8cc76adbbe1f757da3d2bf7c1e93bb52cf022c637fac80081d1283" exitCode=143
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.621174 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerDied","Data":"fcdcb5b22e8cc76adbbe1f757da3d2bf7c1e93bb52cf022c637fac80081d1283"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.625925 4740 generic.go:334] "Generic (PLEG): container finished" podID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerID="0bd5e46d963bd8f9a4c9b7c3012c62f1a37d6c130636480a19dc95c0f26d9e15" exitCode=0
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.626005 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" event={"ID":"3cd0546c-4e67-40e3-93c1-1aee20e6df48","Type":"ContainerDied","Data":"0bd5e46d963bd8f9a4c9b7c3012c62f1a37d6c130636480a19dc95c0f26d9e15"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631378 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerID="6c46e84c5bdb3103f28f1cca092b9bf9c54cd61d61aebe4345cf80eca5a71fc3" exitCode=137
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631404 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerID="99f292203fe1ed3c399eda55f9bf0dc36d25dfeda0396d3a1124005fb27b7059" exitCode=137
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerDied","Data":"6c46e84c5bdb3103f28f1cca092b9bf9c54cd61d61aebe4345cf80eca5a71fc3"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631472 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631556 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631574 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631589 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631613 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631630 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631648 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqbvx\" (UniqueName: \"kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631731 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631757 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631779 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631802 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cllbp\" (UniqueName: \"kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631836 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.632267 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631480 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerDied","Data":"99f292203fe1ed3c399eda55f9bf0dc36d25dfeda0396d3a1124005fb27b7059"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.633278 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.633677 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.635238 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.635675 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.635939 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.636068 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.642680 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.642850 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerDied","Data":"b8d8df7a4ac3106e08eff1e473c10fdc358e20ceadcee7139973a904e19f8b91"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.643025 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.643289 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.643329 4740 generic.go:334] "Generic (PLEG): container finished" podID="fd15191d-cc73-4274-b185-d3572e5deac0" containerID="b8d8df7a4ac3106e08eff1e473c10fdc358e20ceadcee7139973a904e19f8b91" exitCode=0
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.644630 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.648963 4740 generic.go:334] "Generic (PLEG): container finished" podID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerID="126ea531f9d0f4e123ef2ed666501765655bc53f3a4c471202c3b01c12320210" exitCode=137
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.648987 4740 generic.go:334] "Generic (PLEG): container finished" podID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerID="217a126c559956ab646410696f08329f76a43596d698885dcc0d56ddc65d5b42" exitCode=137
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.649175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerDied","Data":"126ea531f9d0f4e123ef2ed666501765655bc53f3a4c471202c3b01c12320210"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.649198 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerDied","Data":"217a126c559956ab646410696f08329f76a43596d698885dcc0d56ddc65d5b42"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.653410 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cllbp\" (UniqueName: \"kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.653919 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqbvx\" (UniqueName: \"kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.863791 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.881223 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.671280 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.671793 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.691650 4740 generic.go:334] "Generic (PLEG): container finished" podID="fd15191d-cc73-4274-b185-d3572e5deac0" containerID="c410381d7374eff2277f42e0cd7ca5c44964d1eb3335c6251f192d7b0d2e3b6a" exitCode=0
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.691690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerDied","Data":"c410381d7374eff2277f42e0cd7ca5c44964d1eb3335c6251f192d7b0d2e3b6a"}
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.749414 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.784130 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.894157 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58778bbcc-2dwkc"
Feb 16 13:10:34 crc kubenswrapper[4740]: E0216 13:10:34.952470 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="e23974e9-800c-4295-8f84-89b4052280cd"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.976431 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs\") pod \"29b02cf4-52ac-4f68-a0de-83f62949ce16\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") "
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.976731 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key\") pod \"29b02cf4-52ac-4f68-a0de-83f62949ce16\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") "
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.976855 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cccxj\" (UniqueName: \"kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj\") pod \"29b02cf4-52ac-4f68-a0de-83f62949ce16\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") "
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.977045 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts\") pod \"29b02cf4-52ac-4f68-a0de-83f62949ce16\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") "
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.977200 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data\") pod \"29b02cf4-52ac-4f68-a0de-83f62949ce16\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.002858 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs" (OuterVolumeSpecName: "logs") pod "29b02cf4-52ac-4f68-a0de-83f62949ce16" (UID: "29b02cf4-52ac-4f68-a0de-83f62949ce16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.009206 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "29b02cf4-52ac-4f68-a0de-83f62949ce16" (UID: "29b02cf4-52ac-4f68-a0de-83f62949ce16"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.018489 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj" (OuterVolumeSpecName: "kube-api-access-cccxj") pod "29b02cf4-52ac-4f68-a0de-83f62949ce16" (UID: "29b02cf4-52ac-4f68-a0de-83f62949ce16"). InnerVolumeSpecName "kube-api-access-cccxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.018721 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data" (OuterVolumeSpecName: "config-data") pod "29b02cf4-52ac-4f68-a0de-83f62949ce16" (UID: "29b02cf4-52ac-4f68-a0de-83f62949ce16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.073122 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts" (OuterVolumeSpecName: "scripts") pod "29b02cf4-52ac-4f68-a0de-83f62949ce16" (UID: "29b02cf4-52ac-4f68-a0de-83f62949ce16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.075744 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v"
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.079775 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-dfc4b7997-nx6ww"
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.081553 4740 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.081582 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cccxj\" (UniqueName: \"kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.081594 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.081602 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.081610 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182523 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key\") pod \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182634 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182654 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data\") pod \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182693 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs\") pod \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182761 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182776 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182793 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcwhc\" (UniqueName: \"kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc\") pod \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182875 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182927 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k97f\" (UniqueName: \"kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182977 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts\") pod \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182995 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.186748 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs" (OuterVolumeSpecName: "logs") pod "fbc73a16-685a-4912-bec0-407ef2c7d3e9" (UID: "fbc73a16-685a-4912-bec0-407ef2c7d3e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.193967 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc" (OuterVolumeSpecName: "kube-api-access-qcwhc") pod "fbc73a16-685a-4912-bec0-407ef2c7d3e9" (UID: "fbc73a16-685a-4912-bec0-407ef2c7d3e9"). InnerVolumeSpecName "kube-api-access-qcwhc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.194036 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fbc73a16-685a-4912-bec0-407ef2c7d3e9" (UID: "fbc73a16-685a-4912-bec0-407ef2c7d3e9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.202079 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"]
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.202620 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f" (OuterVolumeSpecName: "kube-api-access-5k97f") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "kube-api-access-5k97f".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.239896 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data" (OuterVolumeSpecName: "config-data") pod "fbc73a16-685a-4912-bec0-407ef2c7d3e9" (UID: "fbc73a16-685a-4912-bec0-407ef2c7d3e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.258534 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.278353 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config" (OuterVolumeSpecName: "config") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.283671 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts" (OuterVolumeSpecName: "scripts") pod "fbc73a16-685a-4912-bec0-407ef2c7d3e9" (UID: "fbc73a16-685a-4912-bec0-407ef2c7d3e9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285346 4740 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285382 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285392 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285400 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285416 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcwhc\" (UniqueName: \"kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285426 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285435 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k97f\" (UniqueName: \"kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285442 4740 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.303170 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.315785 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.324177 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.387051 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.387080 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.387091 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.396974 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.433599 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.488271 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.488336 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.489525 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.489629 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.489659 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.489714 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q69zx\" (UniqueName: 
\"kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.489738 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.496556 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.507203 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx" (OuterVolumeSpecName: "kube-api-access-q69zx") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "kube-api-access-q69zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.550247 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config" (OuterVolumeSpecName: "config") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.558499 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.567290 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.600169 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.600205 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.600217 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q69zx\" (UniqueName: \"kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.607661 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.642010 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d8c67b945-9qhdf"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.648517 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.650554 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.668627 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.701901 4740 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.701935 4740 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.701946 4740 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.701957 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.703340 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerDied","Data":"ebb9cfe67450ca7ef04f47283c5bb6f822165573117c5716dd75a543c9096769"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.703386 4740 scope.go:117] "RemoveContainer" containerID="6c46e84c5bdb3103f28f1cca092b9bf9c54cd61d61aebe4345cf80eca5a71fc3" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.703489 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.721497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerStarted","Data":"d6c337d79d1d7c6f98bfa582dc6465924b871bb6c4172b522fdd68735c26fd31"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.721785 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.721790 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="ceilometer-notification-agent" containerID="cri-o://1688b3ca22762e9c4762a55031e5640c60db4ffb9816f6edaf523dcfadd2c0a7" gracePeriod=30 Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.721954 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="proxy-httpd" containerID="cri-o://d6c337d79d1d7c6f98bfa582dc6465924b871bb6c4172b522fdd68735c26fd31" gracePeriod=30 Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.721960 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="sg-core" containerID="cri-o://87e47563591dddda2edb2ddfefa2d9a009d1939cf33ba162cc9aca53305f604c" gracePeriod=30 Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.724859 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerStarted","Data":"a30f8e0a38b39c3a533a61933daccfe7c9ac4a55dac7d23ebbd3bc31afe612c4"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.728003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerStarted","Data":"aac1339f667ba287f535c7aecd906d0a53a56e248b1a5a8045462961130dcc2b"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.737139 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.737159 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" event={"ID":"3cd0546c-4e67-40e3-93c1-1aee20e6df48","Type":"ContainerDied","Data":"c97c29b3334866f32db650fc18c5be9637e2474fd2a6df30fb7505a093dc5ffb"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.738870 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d8c67b945-9qhdf" event={"ID":"2d2e1871-02f7-4ff9-9987-054bf39f4418","Type":"ContainerStarted","Data":"743e32c7253a8cc36fbf948e38784e79b55632de84992eaace05d9cb1e718286"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.748109 4740 generic.go:334] "Generic (PLEG): container finished" podID="fecd834c-f149-401b-9c43-810e215a68ed" containerID="7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa" exitCode=0 Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.748261 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" event={"ID":"fecd834c-f149-401b-9c43-810e215a68ed","Type":"ContainerDied","Data":"7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.748295 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" event={"ID":"fecd834c-f149-401b-9c43-810e215a68ed","Type":"ContainerStarted","Data":"07d715fc080ad12a61c95f40dabacc8440c0cb90c7a33ab67e7813105918c946"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.753960 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerDied","Data":"09e37df47c206f35c05b1284ea0f7acd624391787074b2cc755f95b355d26971"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.754199 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.761496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerDied","Data":"ef863733ff531229caffac8488a7e74d8977b0c756b3b6496a0497807187e742"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.761576 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.762698 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.762722 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.810707 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.854895 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.880579 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.888097 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.921507 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.930257 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.937898 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.946298 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.980541 4740 scope.go:117] "RemoveContainer" containerID="99f292203fe1ed3c399eda55f9bf0dc36d25dfeda0396d3a1124005fb27b7059" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.012169 4740 scope.go:117] "RemoveContainer" containerID="0bd5e46d963bd8f9a4c9b7c3012c62f1a37d6c130636480a19dc95c0f26d9e15" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.045799 4740 scope.go:117] "RemoveContainer" containerID="b1122b7363efb2c3f51a15a5779c7778b950ebde41c8ea4640584635fdf06e8f" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.108505 4740 scope.go:117] "RemoveContainer" containerID="b8d8df7a4ac3106e08eff1e473c10fdc358e20ceadcee7139973a904e19f8b91" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.393994 4740 scope.go:117] "RemoveContainer" containerID="c410381d7374eff2277f42e0cd7ca5c44964d1eb3335c6251f192d7b0d2e3b6a" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.649825 4740 scope.go:117] "RemoveContainer" containerID="126ea531f9d0f4e123ef2ed666501765655bc53f3a4c471202c3b01c12320210" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.775170 4740 generic.go:334] "Generic (PLEG): container finished" podID="40882b0a-c73f-4936-83f9-8bef1774c356" containerID="6f1ddfc5faccbe8680ed49109bf97fb17cef2193d7564f21065e43146dcd96d0" exitCode=0 Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.775238 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerDied","Data":"6f1ddfc5faccbe8680ed49109bf97fb17cef2193d7564f21065e43146dcd96d0"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.784296 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d8c67b945-9qhdf" event={"ID":"2d2e1871-02f7-4ff9-9987-054bf39f4418","Type":"ContainerStarted","Data":"fc964d0b49183b12cb5186f46589a16fcb1022c5b54450dc53f1b5f39c5c49ee"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.784342 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d8c67b945-9qhdf" event={"ID":"2d2e1871-02f7-4ff9-9987-054bf39f4418","Type":"ContainerStarted","Data":"4c51adb2e080a37af373d8805aa1044901dd6491b487202c4d5d418cae7c73d7"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.784634 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.827694 4740 generic.go:334] "Generic (PLEG): container finished" podID="e23974e9-800c-4295-8f84-89b4052280cd" containerID="d6c337d79d1d7c6f98bfa582dc6465924b871bb6c4172b522fdd68735c26fd31" exitCode=0 Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.827727 4740 generic.go:334] "Generic (PLEG): container finished" podID="e23974e9-800c-4295-8f84-89b4052280cd" containerID="87e47563591dddda2edb2ddfefa2d9a009d1939cf33ba162cc9aca53305f604c" exitCode=2 Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.827732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerDied","Data":"d6c337d79d1d7c6f98bfa582dc6465924b871bb6c4172b522fdd68735c26fd31"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.827847 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerDied","Data":"87e47563591dddda2edb2ddfefa2d9a009d1939cf33ba162cc9aca53305f604c"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.858242 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerStarted","Data":"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.881538 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7d8c67b945-9qhdf" podStartSLOduration=4.881506406 podStartE2EDuration="4.881506406s" podCreationTimestamp="2026-02-16 13:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:36.88064677 +0000 UTC m=+1064.256995491" watchObservedRunningTime="2026-02-16 13:10:36.881506406 +0000 UTC m=+1064.257855127" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.945986 4740 scope.go:117] "RemoveContainer" containerID="217a126c559956ab646410696f08329f76a43596d698885dcc0d56ddc65d5b42" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.088010 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.254004 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle\") pod \"40882b0a-c73f-4936-83f9-8bef1774c356\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.254345 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bdlr\" (UniqueName: \"kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr\") pod \"40882b0a-c73f-4936-83f9-8bef1774c356\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.254430 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data\") pod \"40882b0a-c73f-4936-83f9-8bef1774c356\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.254478 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs\") pod \"40882b0a-c73f-4936-83f9-8bef1774c356\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.254524 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom\") pod \"40882b0a-c73f-4936-83f9-8bef1774c356\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.255201 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs" (OuterVolumeSpecName: "logs") pod "40882b0a-c73f-4936-83f9-8bef1774c356" (UID: "40882b0a-c73f-4936-83f9-8bef1774c356"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.261539 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "40882b0a-c73f-4936-83f9-8bef1774c356" (UID: "40882b0a-c73f-4936-83f9-8bef1774c356"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.261600 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr" (OuterVolumeSpecName: "kube-api-access-2bdlr") pod "40882b0a-c73f-4936-83f9-8bef1774c356" (UID: "40882b0a-c73f-4936-83f9-8bef1774c356"). InnerVolumeSpecName "kube-api-access-2bdlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.287854 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40882b0a-c73f-4936-83f9-8bef1774c356" (UID: "40882b0a-c73f-4936-83f9-8bef1774c356"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.299022 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" path="/var/lib/kubelet/pods/29b02cf4-52ac-4f68-a0de-83f62949ce16/volumes" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.299946 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" path="/var/lib/kubelet/pods/3cd0546c-4e67-40e3-93c1-1aee20e6df48/volumes" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.300887 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" path="/var/lib/kubelet/pods/fbc73a16-685a-4912-bec0-407ef2c7d3e9/volumes" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.302215 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" path="/var/lib/kubelet/pods/fd15191d-cc73-4274-b185-d3572e5deac0/volumes" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.323318 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data" (OuterVolumeSpecName: "config-data") pod "40882b0a-c73f-4936-83f9-8bef1774c356" (UID: "40882b0a-c73f-4936-83f9-8bef1774c356"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.357139 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.357171 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.357181 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.357192 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.357201 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bdlr\" (UniqueName: \"kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.400894 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.400942 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.867918 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api-log" 
containerID="cri-o://3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94" gracePeriod=30 Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.868781 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerStarted","Data":"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"} Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.869713 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.868893 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api" containerID="cri-o://aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31" gracePeriod=30 Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.876958 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerDied","Data":"302a4fe2d789df7c6696f0d0599ddcf1f3c215a4ff9751a1692adaad334ff853"} Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.877011 4740 scope.go:117] "RemoveContainer" containerID="6f1ddfc5faccbe8680ed49109bf97fb17cef2193d7564f21065e43146dcd96d0" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.877132 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.887345 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" event={"ID":"fecd834c-f149-401b-9c43-810e215a68ed","Type":"ContainerStarted","Data":"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3"} Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.888547 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.890058 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.890046321 podStartE2EDuration="4.890046321s" podCreationTimestamp="2026-02-16 13:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:37.883247546 +0000 UTC m=+1065.259596267" watchObservedRunningTime="2026-02-16 13:10:37.890046321 +0000 UTC m=+1065.266395042" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.906007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerStarted","Data":"7fbcc8a8bd921bc8214ceccbe3647e93cff6d90abe01c523347b0aea8fa73dd5"} Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.923521 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" podStartSLOduration=4.923503926 podStartE2EDuration="4.923503926s" podCreationTimestamp="2026-02-16 13:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:37.916865706 +0000 UTC m=+1065.293214427" watchObservedRunningTime="2026-02-16 13:10:37.923503926 +0000 UTC m=+1065.299852647" Feb 16 
13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.932621 4740 scope.go:117] "RemoveContainer" containerID="fcdcb5b22e8cc76adbbe1f757da3d2bf7c1e93bb52cf022c637fac80081d1283" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.958906 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"] Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.976924 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"] Feb 16 13:10:38 crc kubenswrapper[4740]: E0216 13:10:38.319456 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37375845_ac13_48bc_a134_c8fdc01e4242.slice/crio-aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37375845_ac13_48bc_a134_c8fdc01e4242.slice/crio-conmon-aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31.scope\": RecentStats: unable to find data in memory cache]" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.377695 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.377789 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.382025 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.655257 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792497 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792560 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792654 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792769 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqbvx\" (UniqueName: \"kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792821 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792869 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792892 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.793105 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs" (OuterVolumeSpecName: "logs") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.793185 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.793959 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.793985 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.799972 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts" (OuterVolumeSpecName: "scripts") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.800062 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx" (OuterVolumeSpecName: "kube-api-access-vqbvx") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "kube-api-access-vqbvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.808064 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.831967 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.899187 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqbvx\" (UniqueName: \"kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.899238 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.899249 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.899257 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.911200 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data" (OuterVolumeSpecName: "config-data") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.943399 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.943464 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerDied","Data":"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"} Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.943538 4740 scope.go:117] "RemoveContainer" containerID="aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.948427 4740 generic.go:334] "Generic (PLEG): container finished" podID="37375845-ac13-48bc-a134-c8fdc01e4242" containerID="aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31" exitCode=0 Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.948469 4740 generic.go:334] "Generic (PLEG): container finished" podID="37375845-ac13-48bc-a134-c8fdc01e4242" containerID="3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94" exitCode=143 Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.948557 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerDied","Data":"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"} Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.948586 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerDied","Data":"aac1339f667ba287f535c7aecd906d0a53a56e248b1a5a8045462961130dcc2b"} Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.976941 4740 generic.go:334] "Generic (PLEG): container finished" podID="e23974e9-800c-4295-8f84-89b4052280cd" 
containerID="1688b3ca22762e9c4762a55031e5640c60db4ffb9816f6edaf523dcfadd2c0a7" exitCode=0 Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.977006 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerDied","Data":"1688b3ca22762e9c4762a55031e5640c60db4ffb9816f6edaf523dcfadd2c0a7"} Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.977059 4740 scope.go:117] "RemoveContainer" containerID="3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94" Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.989902 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerStarted","Data":"06d1d7b93be02f29dc72db7cbecd2c0acd7f9cd6914e145816221b024786f6bd"} Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.018507 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.035575 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.042920 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.049591 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050029 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050045 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 
13:10:39.050062 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon-log" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050071 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon-log" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050086 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050094 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050107 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api-log" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050114 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api-log" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050134 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050143 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050162 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050169 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050182 4740 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon-log" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050191 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon-log" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050204 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-httpd" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050212 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-httpd" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050227 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="dnsmasq-dns" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050235 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="dnsmasq-dns" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050255 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-api" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050263 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-api" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050273 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="init" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050279 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="init" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050290 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api-log" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050298 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api-log" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050538 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-api" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050558 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon-log" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050574 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050586 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050598 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050612 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050625 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon-log" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050639 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-httpd" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050657 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="dnsmasq-dns" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050666 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api-log" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050675 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api-log" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.051784 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.052285 4740 scope.go:117] "RemoveContainer" containerID="aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.054961 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31\": container with ID starting with aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31 not found: ID does not exist" containerID="aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.055096 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"} err="failed to get container status \"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31\": rpc error: code = NotFound desc = could not find container \"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31\": container with ID starting with aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31 not found: ID does not exist" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.055171 4740 scope.go:117] "RemoveContainer" 
containerID="3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94" Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.055695 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94\": container with ID starting with 3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94 not found: ID does not exist" containerID="3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.055741 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"} err="failed to get container status \"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94\": rpc error: code = NotFound desc = could not find container \"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94\": container with ID starting with 3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94 not found: ID does not exist" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.055779 4740 scope.go:117] "RemoveContainer" containerID="aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.056450 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"} err="failed to get container status \"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31\": rpc error: code = NotFound desc = could not find container \"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31\": container with ID starting with aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31 not found: ID does not exist" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.056489 4740 scope.go:117] 
"RemoveContainer" containerID="3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.056708 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"} err="failed to get container status \"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94\": rpc error: code = NotFound desc = could not find container \"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94\": container with ID starting with 3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94 not found: ID does not exist" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.060266 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.060522 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.060702 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.062343 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.188106419 podStartE2EDuration="6.062321989s" podCreationTimestamp="2026-02-16 13:10:33 +0000 UTC" firstStartedPulling="2026-02-16 13:10:35.570844223 +0000 UTC m=+1062.947192944" lastFinishedPulling="2026-02-16 13:10:36.445059793 +0000 UTC m=+1063.821408514" observedRunningTime="2026-02-16 13:10:39.030696522 +0000 UTC m=+1066.407045253" watchObservedRunningTime="2026-02-16 13:10:39.062321989 +0000 UTC m=+1066.438670710" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.096586 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:10:39 crc kubenswrapper[4740]: 
I0216 13:10:39.115090 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.136637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.136885 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data-custom\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.136994 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-scripts\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137189 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137283 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" 
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137346 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bx4f\" (UniqueName: \"kubernetes.io/projected/fcc53865-a327-4f02-a908-f0b97ae1e2c2-kube-api-access-7bx4f\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137445 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcc53865-a327-4f02-a908-f0b97ae1e2c2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137526 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137647 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc53865-a327-4f02-a908-f0b97ae1e2c2-logs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.239761 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.239850 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.239894 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.239939 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.239972 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ks2m\" (UniqueName: \"kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.240065 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.240096 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " Feb 16 13:10:39 crc 
kubenswrapper[4740]: I0216 13:10:39.240225 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.240848 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc53865-a327-4f02-a908-f0b97ae1e2c2-logs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.240952 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241006 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data-custom\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241043 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-scripts\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241212 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241284 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241340 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bx4f\" (UniqueName: \"kubernetes.io/projected/fcc53865-a327-4f02-a908-f0b97ae1e2c2-kube-api-access-7bx4f\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241386 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcc53865-a327-4f02-a908-f0b97ae1e2c2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241455 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241548 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241648 4740 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.247117 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcc53865-a327-4f02-a908-f0b97ae1e2c2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.247445 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc53865-a327-4f02-a908-f0b97ae1e2c2-logs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.248367 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.248484 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m" (OuterVolumeSpecName: "kube-api-access-4ks2m") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "kube-api-access-4ks2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.248632 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts" (OuterVolumeSpecName: "scripts") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.249093 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-scripts\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.251688 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data-custom\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.260052 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.261261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.263179 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.263566 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bx4f\" (UniqueName: \"kubernetes.io/projected/fcc53865-a327-4f02-a908-f0b97ae1e2c2-kube-api-access-7bx4f\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.293789 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.296067 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" path="/var/lib/kubelet/pods/37375845-ac13-48bc-a134-c8fdc01e4242/volumes" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.297223 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" path="/var/lib/kubelet/pods/40882b0a-c73f-4936-83f9-8bef1774c356/volumes" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.316252 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.342996 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data" (OuterVolumeSpecName: "config-data") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.343805 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.343999 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.344119 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.344235 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ks2m\" (UniqueName: \"kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.344368 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.344533 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.408702 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.480370 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.538599 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.618488 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.934540 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.001056 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fcc53865-a327-4f02-a908-f0b97ae1e2c2","Type":"ContainerStarted","Data":"1bbd648bd07fb0718d2c22db45e34861633f97eab0f6c26f66c44193d285e767"} Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.004986 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.006047 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerDied","Data":"b5b6803f18aed28cfffe176c666815d2af3e0f9085dbb6960a1b2061739c2efb"} Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.006080 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon-log" containerID="cri-o://450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0" gracePeriod=30 Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.006096 4740 scope.go:117] "RemoveContainer" containerID="d6c337d79d1d7c6f98bfa582dc6465924b871bb6c4172b522fdd68735c26fd31" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.006174 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" containerID="cri-o://e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8" gracePeriod=30 Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.046091 4740 scope.go:117] "RemoveContainer" containerID="87e47563591dddda2edb2ddfefa2d9a009d1939cf33ba162cc9aca53305f604c" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.104089 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.123449 4740 scope.go:117] "RemoveContainer" containerID="1688b3ca22762e9c4762a55031e5640c60db4ffb9816f6edaf523dcfadd2c0a7" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.123630 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.135723 4740 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: E0216 13:10:40.136200 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="sg-core" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136227 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="sg-core" Feb 16 13:10:40 crc kubenswrapper[4740]: E0216 13:10:40.136259 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="ceilometer-notification-agent" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136270 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="ceilometer-notification-agent" Feb 16 13:10:40 crc kubenswrapper[4740]: E0216 13:10:40.136290 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="proxy-httpd" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136298 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="proxy-httpd" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136523 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="ceilometer-notification-agent" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136560 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="proxy-httpd" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136577 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="sg-core" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.139016 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.142075 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.142311 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.156550 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266217 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266331 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266369 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrfb\" (UniqueName: \"kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266424 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " 
pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266495 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266675 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266800 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368400 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztrfb\" (UniqueName: \"kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368428 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368487 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368674 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368712 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.370531 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: 
I0216 13:10:40.370592 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.375431 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.382073 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.384843 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.388965 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.389893 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztrfb\" (UniqueName: \"kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb\") pod \"ceilometer-0\" (UID: 
\"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.472650 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.944319 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: W0216 13:10:40.951698 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66639a68_b84e_4e5f_be92_a3a8f9b7a0fc.slice/crio-b433ec71b31aae836b3d29a5d16e7a01c58ca4844f4917afcf82559a17ea291c WatchSource:0}: Error finding container b433ec71b31aae836b3d29a5d16e7a01c58ca4844f4917afcf82559a17ea291c: Status 404 returned error can't find the container with id b433ec71b31aae836b3d29a5d16e7a01c58ca4844f4917afcf82559a17ea291c Feb 16 13:10:41 crc kubenswrapper[4740]: I0216 13:10:41.016325 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fcc53865-a327-4f02-a908-f0b97ae1e2c2","Type":"ContainerStarted","Data":"d5353b29dea4f1a0aef8ab1e454a5ec26d2e90bcc58d89b9c73b48f13f41d506"} Feb 16 13:10:41 crc kubenswrapper[4740]: I0216 13:10:41.018294 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerStarted","Data":"b433ec71b31aae836b3d29a5d16e7a01c58ca4844f4917afcf82559a17ea291c"} Feb 16 13:10:41 crc kubenswrapper[4740]: I0216 13:10:41.291845 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e23974e9-800c-4295-8f84-89b4052280cd" path="/var/lib/kubelet/pods/e23974e9-800c-4295-8f84-89b4052280cd/volumes" Feb 16 13:10:42 crc kubenswrapper[4740]: I0216 13:10:42.032539 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"fcc53865-a327-4f02-a908-f0b97ae1e2c2","Type":"ContainerStarted","Data":"f8b741c091a93f5638e3803332bc679afdde3e39b144866db437b22c6c5b3b08"} Feb 16 13:10:42 crc kubenswrapper[4740]: I0216 13:10:42.034906 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 13:10:42 crc kubenswrapper[4740]: I0216 13:10:42.035970 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerStarted","Data":"72382a87f5e887b7907f799c275003271fd1fac55b0506ca9e68aa7d6e52a8eb"} Feb 16 13:10:42 crc kubenswrapper[4740]: I0216 13:10:42.073134 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.073111336 podStartE2EDuration="3.073111336s" podCreationTimestamp="2026-02-16 13:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:42.0599015 +0000 UTC m=+1069.436250231" watchObservedRunningTime="2026-02-16 13:10:42.073111336 +0000 UTC m=+1069.449460057" Feb 16 13:10:42 crc kubenswrapper[4740]: I0216 13:10:42.830228 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.054993 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerStarted","Data":"c64596e74d389c4963c6362a5444417dd797ea07c0e4610fbcc271c462d5fb17"} Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.547220 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.817195 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-758758df44-4g6db" Feb 
16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.846784 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.866945 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.959956 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.980655 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"] Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.981157 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="dnsmasq-dns" containerID="cri-o://b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd" gracePeriod=10 Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.065380 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerID="e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8" exitCode=0 Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.065445 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerDied","Data":"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8"} Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.067558 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerStarted","Data":"c7fc77aa58de346a51958c33789668e734927d1f7118df0ac708504154d9cccf"} Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.135500 4740 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.523526 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.658499 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.660038 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.662405 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.662553 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mhlrx" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.668240 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.676213 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.716544 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.781772 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.781940 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782059 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782091 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782114 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782195 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8h97\" 
(UniqueName: \"kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782524 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5ld\" (UniqueName: \"kubernetes.io/projected/4f78f448-6577-48d1-b077-01e42c14758c-kube-api-access-pr5ld\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782588 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782632 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config-secret\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782682 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.808177 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97" 
(OuterVolumeSpecName: "kube-api-access-x8h97") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). InnerVolumeSpecName "kube-api-access-x8h97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.883794 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5ld\" (UniqueName: \"kubernetes.io/projected/4f78f448-6577-48d1-b077-01e42c14758c-kube-api-access-pr5ld\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.883877 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.883911 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config-secret\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.883952 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.884061 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8h97\" (UniqueName: \"kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97\") on node \"crc\" 
DevicePath \"\"" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.884873 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.886661 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.891569 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config-secret\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.893554 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.906449 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.909650 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5ld\" (UniqueName: \"kubernetes.io/projected/4f78f448-6577-48d1-b077-01e42c14758c-kube-api-access-pr5ld\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.929616 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.952460 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.979201 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config" (OuterVolumeSpecName: "config") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.992043 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.992280 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.992369 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.992444 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.992511 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.080004 4740 generic.go:334] "Generic (PLEG): container finished" podID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerID="b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd" exitCode=0 Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.080024 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" event={"ID":"bc2fd3ee-2093-4adb-a7af-23d05c718429","Type":"ContainerDied","Data":"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd"} Feb 16 13:10:45 crc 
kubenswrapper[4740]: I0216 13:10:45.080421 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" event={"ID":"bc2fd3ee-2093-4adb-a7af-23d05c718429","Type":"ContainerDied","Data":"ee39dbad8b099c7103531639b41565947f88a2ed618018750ab7657614220f4a"} Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.080465 4740 scope.go:117] "RemoveContainer" containerID="b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.080061 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.080795 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.081324 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="cinder-scheduler" containerID="cri-o://7fbcc8a8bd921bc8214ceccbe3647e93cff6d90abe01c523347b0aea8fa73dd5" gracePeriod=30 Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.081404 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="probe" containerID="cri-o://06d1d7b93be02f29dc72db7cbecd2c0acd7f9cd6914e145816221b024786f6bd" gracePeriod=30 Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.121738 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"] Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.129646 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"] Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.132854 4740 scope.go:117] "RemoveContainer" 
containerID="2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.164104 4740 scope.go:117] "RemoveContainer" containerID="b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd" Feb 16 13:10:45 crc kubenswrapper[4740]: E0216 13:10:45.173291 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd\": container with ID starting with b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd not found: ID does not exist" containerID="b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.173354 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd"} err="failed to get container status \"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd\": rpc error: code = NotFound desc = could not find container \"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd\": container with ID starting with b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd not found: ID does not exist" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.173386 4740 scope.go:117] "RemoveContainer" containerID="2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7" Feb 16 13:10:45 crc kubenswrapper[4740]: E0216 13:10:45.180335 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7\": container with ID starting with 2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7 not found: ID does not exist" containerID="2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7" Feb 16 13:10:45 crc 
kubenswrapper[4740]: I0216 13:10:45.180395 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7"} err="failed to get container status \"2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7\": rpc error: code = NotFound desc = could not find container \"2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7\": container with ID starting with 2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7 not found: ID does not exist" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.316842 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" path="/var/lib/kubelet/pods/bc2fd3ee-2093-4adb-a7af-23d05c718429/volumes" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.575162 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.575548 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:10:45 crc kubenswrapper[4740]: W0216 13:10:45.633483 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f78f448_6577_48d1_b077_01e42c14758c.slice/crio-8bee62cb9170bfef2be256ac1989255d38f5375f97f5a63d49f8f3a0eabeb3fa WatchSource:0}: Error finding container 8bee62cb9170bfef2be256ac1989255d38f5375f97f5a63d49f8f3a0eabeb3fa: Status 404 
returned error can't find the container with id 8bee62cb9170bfef2be256ac1989255d38f5375f97f5a63d49f8f3a0eabeb3fa Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.635074 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.100430 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4f78f448-6577-48d1-b077-01e42c14758c","Type":"ContainerStarted","Data":"8bee62cb9170bfef2be256ac1989255d38f5375f97f5a63d49f8f3a0eabeb3fa"} Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.105199 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerStarted","Data":"dbc33c2aa4005cf8d6522a704d76dbdcfddf6a4b426cc7f7ac7e835b25c92d9b"} Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.107529 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.117354 4740 generic.go:334] "Generic (PLEG): container finished" podID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerID="06d1d7b93be02f29dc72db7cbecd2c0acd7f9cd6914e145816221b024786f6bd" exitCode=0 Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.117424 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerDied","Data":"06d1d7b93be02f29dc72db7cbecd2c0acd7f9cd6914e145816221b024786f6bd"} Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.143614 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.318937159 podStartE2EDuration="6.143598022s" podCreationTimestamp="2026-02-16 13:10:40 +0000 UTC" firstStartedPulling="2026-02-16 13:10:40.954935244 +0000 UTC m=+1068.331283965" lastFinishedPulling="2026-02-16 13:10:44.779596107 
+0000 UTC m=+1072.155944828" observedRunningTime="2026-02-16 13:10:46.139968377 +0000 UTC m=+1073.516317118" watchObservedRunningTime="2026-02-16 13:10:46.143598022 +0000 UTC m=+1073.519946743" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.115927 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d4b8b747f-tcdvw"] Feb 16 13:10:49 crc kubenswrapper[4740]: E0216 13:10:49.117277 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="init" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.117314 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="init" Feb 16 13:10:49 crc kubenswrapper[4740]: E0216 13:10:49.117362 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="dnsmasq-dns" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.117371 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="dnsmasq-dns" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.117562 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="dnsmasq-dns" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.118915 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.122727 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.122953 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.123060 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.168729 4740 generic.go:334] "Generic (PLEG): container finished" podID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerID="7fbcc8a8bd921bc8214ceccbe3647e93cff6d90abe01c523347b0aea8fa73dd5" exitCode=0 Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.168826 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d4b8b747f-tcdvw"] Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.168860 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerDied","Data":"7fbcc8a8bd921bc8214ceccbe3647e93cff6d90abe01c523347b0aea8fa73dd5"} Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185072 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmkpb\" (UniqueName: \"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-kube-api-access-lmkpb\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185148 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-combined-ca-bundle\") 
pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185298 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-public-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185352 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-run-httpd\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185479 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-internal-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185546 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-log-httpd\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185711 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-etc-swift\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185790 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-config-data\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287752 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-etc-swift\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287826 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-config-data\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmkpb\" (UniqueName: \"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-kube-api-access-lmkpb\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287912 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-combined-ca-bundle\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287941 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-public-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287967 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-run-httpd\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.288001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-internal-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.288026 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-log-httpd\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.288546 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-log-httpd\") pod 
\"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.289803 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-run-httpd\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.295517 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-combined-ca-bundle\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.297571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-public-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.309847 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-etc-swift\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.312777 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-config-data\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " 
pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.314264 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-internal-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.315397 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmkpb\" (UniqueName: \"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-kube-api-access-lmkpb\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.454452 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.954961 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.123940 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.123996 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.124031 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.124092 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.124142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.124172 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf65h\" (UniqueName: 
\"kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.125969 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.137958 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts" (OuterVolumeSpecName: "scripts") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.139488 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h" (OuterVolumeSpecName: "kube-api-access-wf65h") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "kube-api-access-wf65h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.165545 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.181253 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerDied","Data":"a30f8e0a38b39c3a533a61933daccfe7c9ac4a55dac7d23ebbd3bc31afe612c4"} Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.181795 4740 scope.go:117] "RemoveContainer" containerID="06d1d7b93be02f29dc72db7cbecd2c0acd7f9cd6914e145816221b024786f6bd" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.181960 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.221114 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.226350 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.226390 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.226401 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf65h\" (UniqueName: \"kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.226410 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.226565 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: W0216 13:10:50.254116 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfae3001c_021f_4f48_860e_0893978fafaa.slice/crio-3278858c175dfda85f13bf618b30958764fe1971dbc60d752ebb3b423760c7b6 WatchSource:0}: Error finding container 3278858c175dfda85f13bf618b30958764fe1971dbc60d752ebb3b423760c7b6: Status 404 returned error can't find the container with id 3278858c175dfda85f13bf618b30958764fe1971dbc60d752ebb3b423760c7b6 Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 
13:10:50.256144 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d4b8b747f-tcdvw"] Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.271715 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data" (OuterVolumeSpecName: "config-data") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.328364 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.391211 4740 scope.go:117] "RemoveContainer" containerID="7fbcc8a8bd921bc8214ceccbe3647e93cff6d90abe01c523347b0aea8fa73dd5" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.524138 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.536181 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.545187 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:50 crc kubenswrapper[4740]: E0216 13:10:50.545579 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="cinder-scheduler" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.545596 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="cinder-scheduler" Feb 16 13:10:50 crc kubenswrapper[4740]: E0216 13:10:50.545627 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="probe" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.545634 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="probe" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.545794 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="cinder-scheduler" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.545821 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="probe" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.546736 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.557229 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.566205 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.632993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjvcl\" (UniqueName: \"kubernetes.io/projected/8cc77810-2df3-4a51-8429-326b706d2388-kube-api-access-cjvcl\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.633398 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 
13:10:50.633447 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.633485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.633548 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-scripts\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.633742 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8cc77810-2df3-4a51-8429-326b706d2388-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.735770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.736866 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.736934 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.737041 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-scripts\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.737139 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8cc77810-2df3-4a51-8429-326b706d2388-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.737182 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjvcl\" (UniqueName: \"kubernetes.io/projected/8cc77810-2df3-4a51-8429-326b706d2388-kube-api-access-cjvcl\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.738028 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8cc77810-2df3-4a51-8429-326b706d2388-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.742383 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.742406 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.742382 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-scripts\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.743207 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.756163 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjvcl\" (UniqueName: \"kubernetes.io/projected/8cc77810-2df3-4a51-8429-326b706d2388-kube-api-access-cjvcl\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.875913 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.322828 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" path="/var/lib/kubelet/pods/a8fb04c0-6e01-4174-93b2-195dea7f96b6/volumes" Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328213 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" event={"ID":"fae3001c-021f-4f48-860e-0893978fafaa","Type":"ContainerStarted","Data":"f09e821fc0a96094228dd18b50d115c68fbabd0942c18bd405fd6bea81a0106b"} Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328321 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" event={"ID":"fae3001c-021f-4f48-860e-0893978fafaa","Type":"ContainerStarted","Data":"14b07196c52c4c95ce61a63376ccd3d53a29dcf74391576e8a665ef8515c5106"} Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328355 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328378 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328421 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328437 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" event={"ID":"fae3001c-021f-4f48-860e-0893978fafaa","Type":"ContainerStarted","Data":"3278858c175dfda85f13bf618b30958764fe1971dbc60d752ebb3b423760c7b6"} Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.383730 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" podStartSLOduration=2.383710261 
podStartE2EDuration="2.383710261s" podCreationTimestamp="2026-02-16 13:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:51.362339848 +0000 UTC m=+1078.738688589" watchObservedRunningTime="2026-02-16 13:10:51.383710261 +0000 UTC m=+1078.760058982" Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.045532 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.340288 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8cc77810-2df3-4a51-8429-326b706d2388","Type":"ContainerStarted","Data":"9c7afb5b2a8227b3161c2d72f392a8cfea9c6514a9a370d738c7675bf010c2cd"} Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.566798 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.567423 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-central-agent" containerID="cri-o://72382a87f5e887b7907f799c275003271fd1fac55b0506ca9e68aa7d6e52a8eb" gracePeriod=30 Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.567661 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="proxy-httpd" containerID="cri-o://dbc33c2aa4005cf8d6522a704d76dbdcfddf6a4b426cc7f7ac7e835b25c92d9b" gracePeriod=30 Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.567730 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-notification-agent" 
containerID="cri-o://c64596e74d389c4963c6362a5444417dd797ea07c0e4610fbcc271c462d5fb17" gracePeriod=30 Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.567871 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="sg-core" containerID="cri-o://c7fc77aa58de346a51958c33789668e734927d1f7118df0ac708504154d9cccf" gracePeriod=30 Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.354992 4740 generic.go:334] "Generic (PLEG): container finished" podID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerID="dbc33c2aa4005cf8d6522a704d76dbdcfddf6a4b426cc7f7ac7e835b25c92d9b" exitCode=0 Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355341 4740 generic.go:334] "Generic (PLEG): container finished" podID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerID="c7fc77aa58de346a51958c33789668e734927d1f7118df0ac708504154d9cccf" exitCode=2 Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355075 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerDied","Data":"dbc33c2aa4005cf8d6522a704d76dbdcfddf6a4b426cc7f7ac7e835b25c92d9b"} Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355394 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerDied","Data":"c7fc77aa58de346a51958c33789668e734927d1f7118df0ac708504154d9cccf"} Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355420 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerDied","Data":"c64596e74d389c4963c6362a5444417dd797ea07c0e4610fbcc271c462d5fb17"} Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355356 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerID="c64596e74d389c4963c6362a5444417dd797ea07c0e4610fbcc271c462d5fb17" exitCode=0 Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355439 4740 generic.go:334] "Generic (PLEG): container finished" podID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerID="72382a87f5e887b7907f799c275003271fd1fac55b0506ca9e68aa7d6e52a8eb" exitCode=0 Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355454 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerDied","Data":"72382a87f5e887b7907f799c275003271fd1fac55b0506ca9e68aa7d6e52a8eb"} Feb 16 13:10:54 crc kubenswrapper[4740]: I0216 13:10:54.524962 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.693172 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-r46m7"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.694857 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.731370 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-r46m7"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.780367 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-ctmrz"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.781415 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.811118 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ctmrz"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.831264 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kh9r\" (UniqueName: \"kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.831628 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nr5p\" (UniqueName: \"kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.831727 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.831989 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.886085 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-db-create-jfqz9"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.892471 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.907883 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-877a-account-create-update-w87w5"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.909501 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.919680 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.924059 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jfqz9"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.929934 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-877a-account-create-update-w87w5"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.934253 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.934366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kh9r\" (UniqueName: \"kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.934420 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7nr5p\" (UniqueName: \"kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.934448 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.935289 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.935843 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.985133 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kh9r\" (UniqueName: \"kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.987116 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nr5p\" 
(UniqueName: \"kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.032941 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.036052 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.036168 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pn2z\" (UniqueName: \"kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z\") pod \"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.036483 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c5xs\" (UniqueName: \"kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.036572 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts\") pod 
\"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.092264 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7f98-account-create-update-77pz4"] Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.093774 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.100614 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.102299 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.107276 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7f98-account-create-update-77pz4"] Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.139157 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pn2z\" (UniqueName: \"kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z\") pod \"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.139458 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c5xs\" (UniqueName: \"kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.139620 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4frj\" (UniqueName: \"kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.139752 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts\") pod \"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.140033 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.140188 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.140961 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts\") pod \"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " 
pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.141195 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.185571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c5xs\" (UniqueName: \"kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.186976 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pn2z\" (UniqueName: \"kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z\") pod \"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.225507 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.240091 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.241422 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4frj\" (UniqueName: \"kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.241507 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.242121 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.258413 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4frj\" (UniqueName: \"kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.299854 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9f9b-account-create-update-rc9td"] Feb 16 13:10:57 crc kubenswrapper[4740]: 
I0216 13:10:57.301586 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.304128 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.322632 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9f9b-account-create-update-rc9td"] Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.343147 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4722\" (UniqueName: \"kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.343366 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.419005 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.445036 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.445149 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4722\" (UniqueName: \"kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.445887 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.466855 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4722\" (UniqueName: \"kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.622980 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:58 crc kubenswrapper[4740]: I0216 13:10:58.948210 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.124378 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.124848 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.124955 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.125055 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.125120 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " 
Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.125206 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.125283 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztrfb\" (UniqueName: \"kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.125315 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.126005 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.134319 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb" (OuterVolumeSpecName: "kube-api-access-ztrfb") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "kube-api-access-ztrfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.151883 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts" (OuterVolumeSpecName: "scripts") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.197580 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.238992 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.239425 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.239520 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.239550 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: 
I0216 13:10:59.239597 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztrfb\" (UniqueName: \"kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.349968 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data" (OuterVolumeSpecName: "config-data") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.375869 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-r46m7"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.375913 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9f9b-account-create-update-rc9td"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.375933 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-877a-account-create-update-w87w5"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.376864 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.417030 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r46m7" event={"ID":"ce83ec9b-39d5-4bf9-b343-d3f06f886841","Type":"ContainerStarted","Data":"6aac4868b94b8fa997c6ab3355c8f6f2ce451f6b914ddb3f25c66b60eb51c555"} Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.420667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerDied","Data":"b433ec71b31aae836b3d29a5d16e7a01c58ca4844f4917afcf82559a17ea291c"} Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.420724 4740 scope.go:117] "RemoveContainer" containerID="dbc33c2aa4005cf8d6522a704d76dbdcfddf6a4b426cc7f7ac7e835b25c92d9b" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.420859 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.422890 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" event={"ID":"2dc528c1-14c9-4bb4-a6f8-621fc066e98a","Type":"ContainerStarted","Data":"fc1d94585cbf542c4a82daaa8bc901a3a032869c85dcc2de75ed3559a0672a2b"} Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.432956 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-877a-account-create-update-w87w5" event={"ID":"c28029f1-eca0-4cd5-95b3-774c21d6d0ed","Type":"ContainerStarted","Data":"585e016e2bc88ce1b0b163fd4e4601844702a5701f75316cf007c1ee55969af3"} Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.441816 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4f78f448-6577-48d1-b077-01e42c14758c","Type":"ContainerStarted","Data":"20f4a8e57e76cc360b7850b5367d2684c122f4c9c4b3092787b94b72265cf7db"} Feb 16 13:10:59 crc 
kubenswrapper[4740]: I0216 13:10:59.446459 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.446513 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.457922 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7f98-account-create-update-77pz4"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.472387 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.708924209 podStartE2EDuration="15.472365912s" podCreationTimestamp="2026-02-16 13:10:44 +0000 UTC" firstStartedPulling="2026-02-16 13:10:45.636862201 +0000 UTC m=+1073.013210922" lastFinishedPulling="2026-02-16 13:10:58.400303894 +0000 UTC m=+1085.776652625" observedRunningTime="2026-02-16 13:10:59.464590537 +0000 UTC m=+1086.840939258" watchObservedRunningTime="2026-02-16 13:10:59.472365912 +0000 UTC m=+1086.848714633" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.478737 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.485135 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.504557 4740 scope.go:117] "RemoveContainer" containerID="c7fc77aa58de346a51958c33789668e734927d1f7118df0ac708504154d9cccf" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.559849 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 
16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.585714 4740 scope.go:117] "RemoveContainer" containerID="c64596e74d389c4963c6362a5444417dd797ea07c0e4610fbcc271c462d5fb17" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.608392 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.625478 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:59 crc kubenswrapper[4740]: E0216 13:10:59.626445 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="proxy-httpd" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626467 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="proxy-httpd" Feb 16 13:10:59 crc kubenswrapper[4740]: E0216 13:10:59.626477 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-notification-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626483 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-notification-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: E0216 13:10:59.626499 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="sg-core" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626505 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="sg-core" Feb 16 13:10:59 crc kubenswrapper[4740]: E0216 13:10:59.626522 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-central-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626530 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-central-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626709 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-central-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626725 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="sg-core" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626735 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="proxy-httpd" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626745 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-notification-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.628767 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.633352 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.633575 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.653201 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.667060 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ctmrz"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.679217 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jfqz9"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.735660 4740 scope.go:117] "RemoveContainer" containerID="72382a87f5e887b7907f799c275003271fd1fac55b0506ca9e68aa7d6e52a8eb" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753066 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753111 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggbsf\" (UniqueName: \"kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753148 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753217 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753251 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753317 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753352 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.854836 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 
16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.854899 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.854950 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.854974 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.855029 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.855051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggbsf\" (UniqueName: \"kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.855078 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.861503 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.861840 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.863734 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.865095 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.871782 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.881697 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.890626 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggbsf\" (UniqueName: \"kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.137312 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.466340 4740 generic.go:334] "Generic (PLEG): container finished" podID="b93273db-db1d-4c4b-85ad-2d87065c42f4" containerID="20579605a8e47ed4449e3d674d1bbbbcd44cb3f5f3aba1e332068a4ec56b723d" exitCode=0 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.466767 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jfqz9" event={"ID":"b93273db-db1d-4c4b-85ad-2d87065c42f4","Type":"ContainerDied","Data":"20579605a8e47ed4449e3d674d1bbbbcd44cb3f5f3aba1e332068a4ec56b723d"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.466796 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jfqz9" event={"ID":"b93273db-db1d-4c4b-85ad-2d87065c42f4","Type":"ContainerStarted","Data":"c75c45c1f77c0a9c8fd12b4ba2ae2d39b749889c83e394f783874aac9c69b1c4"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.473429 4740 generic.go:334] "Generic (PLEG): container finished" podID="c28029f1-eca0-4cd5-95b3-774c21d6d0ed" containerID="344296975e26624ad4cacf476e74a30fa10626ccf25a97f67365e99050dc2e41" exitCode=0 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.473481 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-877a-account-create-update-w87w5" event={"ID":"c28029f1-eca0-4cd5-95b3-774c21d6d0ed","Type":"ContainerDied","Data":"344296975e26624ad4cacf476e74a30fa10626ccf25a97f67365e99050dc2e41"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.478587 4740 generic.go:334] "Generic (PLEG): container finished" podID="ce83ec9b-39d5-4bf9-b343-d3f06f886841" containerID="4aa507b0c5065c88dcc09741d4612ac5be715de1d8ac33a2444842a74593667f" exitCode=0 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.478663 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r46m7" event={"ID":"ce83ec9b-39d5-4bf9-b343-d3f06f886841","Type":"ContainerDied","Data":"4aa507b0c5065c88dcc09741d4612ac5be715de1d8ac33a2444842a74593667f"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.488336 4740 generic.go:334] "Generic (PLEG): container finished" podID="e2ec561b-87d9-418d-9376-c48bb31d46f9" containerID="04fb5b738af72ba9d62044da274c169ea32070a2cc600c09016c81106717ecdd" exitCode=0 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.488421 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" event={"ID":"e2ec561b-87d9-418d-9376-c48bb31d46f9","Type":"ContainerDied","Data":"04fb5b738af72ba9d62044da274c169ea32070a2cc600c09016c81106717ecdd"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.488457 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" event={"ID":"e2ec561b-87d9-418d-9376-c48bb31d46f9","Type":"ContainerStarted","Data":"1990ff02490a5ac4c3ee758f612e5e6e28b9ec1836380921261354160ecbce9d"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.502415 4740 generic.go:334] "Generic (PLEG): container finished" podID="0bf48619-6b39-4215-950a-f8da809dcc11" containerID="d50713edffd58148f7599a08a7e47edd0028addbe2f11ff9b3ec1d7b2dedaaf8" exitCode=0 Feb 16 13:11:00 crc 
kubenswrapper[4740]: I0216 13:11:00.502565 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ctmrz" event={"ID":"0bf48619-6b39-4215-950a-f8da809dcc11","Type":"ContainerDied","Data":"d50713edffd58148f7599a08a7e47edd0028addbe2f11ff9b3ec1d7b2dedaaf8"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.502589 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ctmrz" event={"ID":"0bf48619-6b39-4215-950a-f8da809dcc11","Type":"ContainerStarted","Data":"264bfa432ce913384c33b9a2803385353bcb4dacbf157ec9caac21b48465e46c"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.509310 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8cc77810-2df3-4a51-8429-326b706d2388","Type":"ContainerStarted","Data":"5f5c1f53d7e962d3996e4cae85f99250cc4b43ee0e746058e8d85d62d6ec4a0d"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.528116 4740 generic.go:334] "Generic (PLEG): container finished" podID="2dc528c1-14c9-4bb4-a6f8-621fc066e98a" containerID="04f7de9c276248f11e1d14a403f81582e43458c7dfd1d3b8fc3dc8186de0b569" exitCode=0 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.529270 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" event={"ID":"2dc528c1-14c9-4bb4-a6f8-621fc066e98a","Type":"ContainerDied","Data":"04f7de9c276248f11e1d14a403f81582e43458c7dfd1d3b8fc3dc8186de0b569"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.600309 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.600568 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-log" containerID="cri-o://a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d" 
gracePeriod=30 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.601023 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-httpd" containerID="cri-o://78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b" gracePeriod=30 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.712378 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.296617 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" path="/var/lib/kubelet/pods/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc/volumes" Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.543846 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerStarted","Data":"b47ff1a3cc1cf6e6ce635669516cb1eeff7f42e4967363ec4445652f9d813b11"} Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.544223 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerStarted","Data":"fc5f113e8cade2490d498d1de5db88d22ff7ac6d77cb5f061cb4a440e771185d"} Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.545631 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8cc77810-2df3-4a51-8429-326b706d2388","Type":"ContainerStarted","Data":"d5a1cd3c9bdd8394f36c620fccd25a64965afba1d80863bfc2e95f488a0fc03e"} Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.551259 4740 generic.go:334] "Generic (PLEG): container finished" podID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerID="a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d" exitCode=143 Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.551407 
4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerDied","Data":"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d"} Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.573471 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=11.57344775 podStartE2EDuration="11.57344775s" podCreationTimestamp="2026-02-16 13:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:11:01.569271789 +0000 UTC m=+1088.945620510" watchObservedRunningTime="2026-02-16 13:11:01.57344775 +0000 UTC m=+1088.949796481" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.168176 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.322078 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts\") pod \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.322631 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4722\" (UniqueName: \"kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722\") pod \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.324250 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "2dc528c1-14c9-4bb4-a6f8-621fc066e98a" (UID: "2dc528c1-14c9-4bb4-a6f8-621fc066e98a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.335136 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722" (OuterVolumeSpecName: "kube-api-access-b4722") pod "2dc528c1-14c9-4bb4-a6f8-621fc066e98a" (UID: "2dc528c1-14c9-4bb4-a6f8-621fc066e98a"). InnerVolumeSpecName "kube-api-access-b4722". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.424839 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4722\" (UniqueName: \"kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.424875 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.475514 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.483495 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.486710 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r46m7" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.491280 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.514429 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.568596 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ctmrz" event={"ID":"0bf48619-6b39-4215-950a-f8da809dcc11","Type":"ContainerDied","Data":"264bfa432ce913384c33b9a2803385353bcb4dacbf157ec9caac21b48465e46c"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.568632 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="264bfa432ce913384c33b9a2803385353bcb4dacbf157ec9caac21b48465e46c" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.568685 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.571667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerStarted","Data":"8ce27faa665c6476b0cc53766481f3f6617cbb88c529599da7ba09a849b8b74d"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.573585 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jfqz9" event={"ID":"b93273db-db1d-4c4b-85ad-2d87065c42f4","Type":"ContainerDied","Data":"c75c45c1f77c0a9c8fd12b4ba2ae2d39b749889c83e394f783874aac9c69b1c4"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.573608 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c75c45c1f77c0a9c8fd12b4ba2ae2d39b749889c83e394f783874aac9c69b1c4" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.573646 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.574931 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r46m7" event={"ID":"ce83ec9b-39d5-4bf9-b343-d3f06f886841","Type":"ContainerDied","Data":"6aac4868b94b8fa997c6ab3355c8f6f2ce451f6b914ddb3f25c66b60eb51c555"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.575045 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r46m7" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.574952 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aac4868b94b8fa997c6ab3355c8f6f2ce451f6b914ddb3f25c66b60eb51c555" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.576337 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" event={"ID":"e2ec561b-87d9-418d-9376-c48bb31d46f9","Type":"ContainerDied","Data":"1990ff02490a5ac4c3ee758f612e5e6e28b9ec1836380921261354160ecbce9d"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.576360 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1990ff02490a5ac4c3ee758f612e5e6e28b9ec1836380921261354160ecbce9d" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.576400 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.577477 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" event={"ID":"2dc528c1-14c9-4bb4-a6f8-621fc066e98a","Type":"ContainerDied","Data":"fc1d94585cbf542c4a82daaa8bc901a3a032869c85dcc2de75ed3559a0672a2b"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.577497 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1d94585cbf542c4a82daaa8bc901a3a032869c85dcc2de75ed3559a0672a2b" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.577531 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.583277 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-877a-account-create-update-w87w5" event={"ID":"c28029f1-eca0-4cd5-95b3-774c21d6d0ed","Type":"ContainerDied","Data":"585e016e2bc88ce1b0b163fd4e4601844702a5701f75316cf007c1ee55969af3"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.583301 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="585e016e2bc88ce1b0b163fd4e4601844702a5701f75316cf007c1ee55969af3" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.583283 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630277 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts\") pod \"b93273db-db1d-4c4b-85ad-2d87065c42f4\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630327 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts\") pod \"e2ec561b-87d9-418d-9376-c48bb31d46f9\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630397 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nr5p\" (UniqueName: \"kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p\") pod \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630453 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pn2z\" (UniqueName: \"kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z\") pod \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630481 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kh9r\" (UniqueName: \"kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r\") pod \"0bf48619-6b39-4215-950a-f8da809dcc11\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630509 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts\") pod \"0bf48619-6b39-4215-950a-f8da809dcc11\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630534 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c5xs\" (UniqueName: \"kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs\") pod \"b93273db-db1d-4c4b-85ad-2d87065c42f4\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630616 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts\") pod \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630646 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4frj\" (UniqueName: \"kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj\") pod \"e2ec561b-87d9-418d-9376-c48bb31d46f9\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630681 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts\") pod \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.631941 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"ce83ec9b-39d5-4bf9-b343-d3f06f886841" (UID: "ce83ec9b-39d5-4bf9-b343-d3f06f886841"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.632584 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b93273db-db1d-4c4b-85ad-2d87065c42f4" (UID: "b93273db-db1d-4c4b-85ad-2d87065c42f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.633071 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2ec561b-87d9-418d-9376-c48bb31d46f9" (UID: "e2ec561b-87d9-418d-9376-c48bb31d46f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.634684 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c28029f1-eca0-4cd5-95b3-774c21d6d0ed" (UID: "c28029f1-eca0-4cd5-95b3-774c21d6d0ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.635520 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bf48619-6b39-4215-950a-f8da809dcc11" (UID: "0bf48619-6b39-4215-950a-f8da809dcc11"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.639991 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs" (OuterVolumeSpecName: "kube-api-access-8c5xs") pod "b93273db-db1d-4c4b-85ad-2d87065c42f4" (UID: "b93273db-db1d-4c4b-85ad-2d87065c42f4"). InnerVolumeSpecName "kube-api-access-8c5xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.639998 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r" (OuterVolumeSpecName: "kube-api-access-6kh9r") pod "0bf48619-6b39-4215-950a-f8da809dcc11" (UID: "0bf48619-6b39-4215-950a-f8da809dcc11"). InnerVolumeSpecName "kube-api-access-6kh9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.640081 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p" (OuterVolumeSpecName: "kube-api-access-7nr5p") pod "ce83ec9b-39d5-4bf9-b343-d3f06f886841" (UID: "ce83ec9b-39d5-4bf9-b343-d3f06f886841"). InnerVolumeSpecName "kube-api-access-7nr5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.640118 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z" (OuterVolumeSpecName: "kube-api-access-2pn2z") pod "c28029f1-eca0-4cd5-95b3-774c21d6d0ed" (UID: "c28029f1-eca0-4cd5-95b3-774c21d6d0ed"). InnerVolumeSpecName "kube-api-access-2pn2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.640172 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj" (OuterVolumeSpecName: "kube-api-access-n4frj") pod "e2ec561b-87d9-418d-9376-c48bb31d46f9" (UID: "e2ec561b-87d9-418d-9376-c48bb31d46f9"). InnerVolumeSpecName "kube-api-access-n4frj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734732 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734760 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4frj\" (UniqueName: \"kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734770 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734779 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734789 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734797 4740 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-7nr5p\" (UniqueName: \"kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734825 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pn2z\" (UniqueName: \"kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734845 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kh9r\" (UniqueName: \"kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734859 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734871 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c5xs\" (UniqueName: \"kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.821354 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.894211 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.894446 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84f8c7948d-wxf52" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-api" containerID="cri-o://1e87b39de0e646ce6ba449ee63a82579996c381dcaa6e162f4b5763dfb4523c5" gracePeriod=30 Feb 16 13:11:02 
crc kubenswrapper[4740]: I0216 13:11:02.894672 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84f8c7948d-wxf52" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-httpd" containerID="cri-o://90b8c13105002e23bcbcbbfad6decf1c72cc959f43ea05e2caf8e9e59758b0a9" gracePeriod=30 Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.594340 4740 generic.go:334] "Generic (PLEG): container finished" podID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerID="90b8c13105002e23bcbcbbfad6decf1c72cc959f43ea05e2caf8e9e59758b0a9" exitCode=0 Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.594399 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerDied","Data":"90b8c13105002e23bcbcbbfad6decf1c72cc959f43ea05e2caf8e9e59758b0a9"} Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.597009 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerStarted","Data":"915edf68cd3c7270b236b58655ea096c142d3604b5d0dd9bafbc0091d2a43aae"} Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.668413 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.668675 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-log" containerID="cri-o://e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6" gracePeriod=30 Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.668741 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-httpd" 
containerID="cri-o://6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423" gracePeriod=30 Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.803375 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.345149 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480312 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480360 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480447 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480501 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480582 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480610 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480650 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25p4g\" (UniqueName: \"kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480668 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.481395 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs" (OuterVolumeSpecName: "logs") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.481715 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.493450 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.497145 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts" (OuterVolumeSpecName: "scripts") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.499017 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g" (OuterVolumeSpecName: "kube-api-access-25p4g") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "kube-api-access-25p4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.524646 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.524781 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.528039 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.555847 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data" (OuterVolumeSpecName: "config-data") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.562488 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582530 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582562 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582593 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582604 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582618 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25p4g\" (UniqueName: \"kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582626 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582634 4740 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582641 4740 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.601929 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.607620 4740 generic.go:334] "Generic (PLEG): container finished" podID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerID="78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b" exitCode=0 Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.607683 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.607701 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerDied","Data":"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b"} Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.608362 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerDied","Data":"5b0e438309976f20e6cf23ad9e1052b831e4bb9fad154e2636f7ef4afee681a4"} Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.608402 4740 scope.go:117] "RemoveContainer" containerID="78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.611386 4740 generic.go:334] "Generic (PLEG): container finished" podID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerID="e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6" exitCode=143 Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.611463 4740 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerDied","Data":"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6"} Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.684595 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.687103 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.703462 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.714629 4740 scope.go:117] "RemoveContainer" containerID="a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.722678 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723345 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc528c1-14c9-4bb4-a6f8-621fc066e98a" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723361 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc528c1-14c9-4bb4-a6f8-621fc066e98a" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723371 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28029f1-eca0-4cd5-95b3-774c21d6d0ed" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723381 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28029f1-eca0-4cd5-95b3-774c21d6d0ed" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc 
kubenswrapper[4740]: E0216 13:11:04.723396 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf48619-6b39-4215-950a-f8da809dcc11" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723403 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf48619-6b39-4215-950a-f8da809dcc11" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723411 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-httpd" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723418 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-httpd" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723438 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-log" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723445 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-log" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723456 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce83ec9b-39d5-4bf9-b343-d3f06f886841" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723465 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce83ec9b-39d5-4bf9-b343-d3f06f886841" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723485 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93273db-db1d-4c4b-85ad-2d87065c42f4" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723492 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93273db-db1d-4c4b-85ad-2d87065c42f4" containerName="mariadb-database-create" 
Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723529 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ec561b-87d9-418d-9376-c48bb31d46f9" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723537 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ec561b-87d9-418d-9376-c48bb31d46f9" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723704 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93273db-db1d-4c4b-85ad-2d87065c42f4" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723718 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-httpd" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723726 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-log" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723737 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc528c1-14c9-4bb4-a6f8-621fc066e98a" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723749 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce83ec9b-39d5-4bf9-b343-d3f06f886841" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723763 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf48619-6b39-4215-950a-f8da809dcc11" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723773 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ec561b-87d9-418d-9376-c48bb31d46f9" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723785 4740 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c28029f1-eca0-4cd5-95b3-774c21d6d0ed" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.762943 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.767360 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.767655 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.772562 4740 scope.go:117] "RemoveContainer" containerID="78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.780893 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b\": container with ID starting with 78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b not found: ID does not exist" containerID="78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.781590 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b"} err="failed to get container status \"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b\": rpc error: code = NotFound desc = could not find container \"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b\": container with ID starting with 78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b not found: ID does not exist" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.781707 4740 scope.go:117] "RemoveContainer" 
containerID="a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.784083 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.792508 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d\": container with ID starting with a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d not found: ID does not exist" containerID="a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.792569 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d"} err="failed to get container status \"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d\": rpc error: code = NotFound desc = could not find container \"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d\": container with ID starting with a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d not found: ID does not exist" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.892991 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-logs\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893254 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893375 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893553 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpr7s\" (UniqueName: \"kubernetes.io/projected/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-kube-api-access-mpr7s\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893676 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893768 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893974 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.894401 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.997840 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998174 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998323 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpr7s\" (UniqueName: \"kubernetes.io/projected/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-kube-api-access-mpr7s\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998415 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998495 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998611 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998743 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998869 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-logs\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.999499 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-logs\") pod \"glance-default-external-api-0\" (UID: 
\"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.000175 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.002396 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.011117 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.011416 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.012114 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " 
pod="openstack/glance-default-external-api-0"
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.019516 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.046668 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpr7s\" (UniqueName: \"kubernetes.io/projected/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-kube-api-access-mpr7s\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.060991 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.103421 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.294697 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" path="/var/lib/kubelet/pods/444d5830-ca5b-426e-a7da-785e35ae1e65/volumes"
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623379 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerStarted","Data":"14b7b362722a6742396e9c6ada9fc6b8542386c6aa6a2d0db427fdcad3db1b40"}
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623552 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-central-agent" containerID="cri-o://b47ff1a3cc1cf6e6ce635669516cb1eeff7f42e4967363ec4445652f9d813b11" gracePeriod=30
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623565 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="proxy-httpd" containerID="cri-o://14b7b362722a6742396e9c6ada9fc6b8542386c6aa6a2d0db427fdcad3db1b40" gracePeriod=30
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623609 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623638 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-notification-agent" containerID="cri-o://8ce27faa665c6476b0cc53766481f3f6617cbb88c529599da7ba09a849b8b74d" gracePeriod=30
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623689 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="sg-core" containerID="cri-o://915edf68cd3c7270b236b58655ea096c142d3604b5d0dd9bafbc0091d2a43aae" gracePeriod=30
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.710973 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.788527633 podStartE2EDuration="6.710923118s" podCreationTimestamp="2026-02-16 13:10:59 +0000 UTC" firstStartedPulling="2026-02-16 13:11:00.740755612 +0000 UTC m=+1088.117104333" lastFinishedPulling="2026-02-16 13:11:04.663151097 +0000 UTC m=+1092.039499818" observedRunningTime="2026-02-16 13:11:05.645134714 +0000 UTC m=+1093.021483445" watchObservedRunningTime="2026-02-16 13:11:05.710923118 +0000 UTC m=+1093.087271839"
Feb 16 13:11:05 crc kubenswrapper[4740]: W0216 13:11:05.711540 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8535644_0ebc_4cc6_bbc5_a5ef02f30685.slice/crio-779a311802c9a6d76fb87ab9b57278280ece55af45c53f0ade4127dbdb8b482f WatchSource:0}: Error finding container 779a311802c9a6d76fb87ab9b57278280ece55af45c53f0ade4127dbdb8b482f: Status 404 returned error can't find the container with id 779a311802c9a6d76fb87ab9b57278280ece55af45c53f0ade4127dbdb8b482f
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.712750 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.876547 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.126911 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637452 4740 generic.go:334] "Generic (PLEG): container finished" podID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerID="14b7b362722a6742396e9c6ada9fc6b8542386c6aa6a2d0db427fdcad3db1b40" exitCode=0
Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637785 4740 generic.go:334] "Generic (PLEG): container finished" podID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerID="915edf68cd3c7270b236b58655ea096c142d3604b5d0dd9bafbc0091d2a43aae" exitCode=2
Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637795 4740 generic.go:334] "Generic (PLEG): container finished" podID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerID="8ce27faa665c6476b0cc53766481f3f6617cbb88c529599da7ba09a849b8b74d" exitCode=0
Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637539 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerDied","Data":"14b7b362722a6742396e9c6ada9fc6b8542386c6aa6a2d0db427fdcad3db1b40"}
Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637891 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerDied","Data":"915edf68cd3c7270b236b58655ea096c142d3604b5d0dd9bafbc0091d2a43aae"}
Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637935 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerDied","Data":"8ce27faa665c6476b0cc53766481f3f6617cbb88c529599da7ba09a849b8b74d"}
Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.642350 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8535644-0ebc-4cc6-bbc5-a5ef02f30685","Type":"ContainerStarted","Data":"fd933378fd3e26ee2a045b2089a447b28202255138a0600775787bfcc5f7843d"}
Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.642392 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8535644-0ebc-4cc6-bbc5-a5ef02f30685","Type":"ContainerStarted","Data":"779a311802c9a6d76fb87ab9b57278280ece55af45c53f0ade4127dbdb8b482f"}
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.373809 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfmpm"]
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.375153 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.376644 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.376940 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-45m6j"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.379313 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.381122 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.388179 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfmpm"]
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.453704 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") "
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.453830 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") "
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.453944 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8spl\" (UniqueName: \"kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") "
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454025 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") "
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454069 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") "
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454119 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") "
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454141 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") "
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454167 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") "
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454465 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjmbn\" (UniqueName: \"kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454543 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.456322 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.460011 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs" (OuterVolumeSpecName: "logs") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.465683 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.474095 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts" (OuterVolumeSpecName: "scripts") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.477788 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl" (OuterVolumeSpecName: "kube-api-access-k8spl") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "kube-api-access-k8spl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.505657 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.540231 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558314 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558431 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjmbn\" (UniqueName: \"kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558529 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558627 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558664 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs\") on node \"crc\" DevicePath \"\""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558674 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8spl\" (UniqueName: \"kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl\") on node \"crc\" DevicePath \"\""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558684 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558731 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558741 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558749 4740 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.562173 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data" (OuterVolumeSpecName: "config-data") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.565920 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.568290 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.570910 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.584790 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjmbn\" (UniqueName: \"kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.614468 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.661366 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.661407 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.674964 4740 generic.go:334] "Generic (PLEG): container finished" podID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerID="6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423" exitCode=0
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.675046 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.675046 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerDied","Data":"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423"}
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.675111 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerDied","Data":"b1f805b3f42130f9ec256249b24cf294052db79347125cb78ef7bd761396a42c"}
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.675132 4740 scope.go:117] "RemoveContainer" containerID="6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.681186 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8535644-0ebc-4cc6-bbc5-a5ef02f30685","Type":"ContainerStarted","Data":"bb78e5415ed170cd65d1da7cf4ac8e474649a626d35a5fb6c378aa0900606bd5"}
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.694064 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfmpm"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.712293 4740 scope.go:117] "RemoveContainer" containerID="e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.719535 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.719514701 podStartE2EDuration="3.719514701s" podCreationTimestamp="2026-02-16 13:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:11:07.702195944 +0000 UTC m=+1095.078544685" watchObservedRunningTime="2026-02-16 13:11:07.719514701 +0000 UTC m=+1095.095863422"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.731497 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.751756 4740 scope.go:117] "RemoveContainer" containerID="6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423"
Feb 16 13:11:07 crc kubenswrapper[4740]: E0216 13:11:07.756220 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423\": container with ID starting with 6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423 not found: ID does not exist" containerID="6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.756286 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423"} err="failed to get container status \"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423\": rpc error: code = NotFound desc = could not find container \"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423\": container with ID starting with 6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423 not found: ID does not exist"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.756319 4740 scope.go:117] "RemoveContainer" containerID="e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6"
Feb 16 13:11:07 crc kubenswrapper[4740]: E0216 13:11:07.756976 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6\": container with ID starting with e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6 not found: ID does not exist" containerID="e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.757004 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6"} err="failed to get container status \"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6\": rpc error: code = NotFound desc = could not find container \"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6\": container with ID starting with e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6 not found: ID does not exist"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.757038 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.771623 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:11:07 crc kubenswrapper[4740]: E0216 13:11:07.772044 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-httpd"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.772062 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-httpd"
Feb 16 13:11:07 crc kubenswrapper[4740]: E0216 13:11:07.772089 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-log"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.772097 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-log"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.772278 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-log"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.772297 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-httpd"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.773315 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.775006 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.775527 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.787253 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864763 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864843 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864874 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864904 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864924 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864954 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghg7q\" (UniqueName: \"kubernetes.io/projected/1da7f67c-ce66-4f6b-b760-f2ae017599c0-kube-api-access-ghg7q\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.865013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.865066 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.966700 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967015 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967054 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967079 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967112 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967133 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967168 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghg7q\" (UniqueName: \"kubernetes.io/projected/1da7f67c-ce66-4f6b-b760-f2ae017599c0-kube-api-access-ghg7q\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967221 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967304 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967636 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.969203 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.970995 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.971479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.977261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.980256 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.998459 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghg7q\" (UniqueName: \"kubernetes.io/projected/1da7f67c-ce66-4f6b-b760-f2ae017599c0-kube-api-access-ghg7q\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:11:08
crc kubenswrapper[4740]: I0216 13:11:08.015329 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.095572 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.254655 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfmpm"] Feb 16 13:11:08 crc kubenswrapper[4740]: W0216 13:11:08.256319 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fce641e_1b76_4b99_a99d_9a0ccbf9680e.slice/crio-10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd WatchSource:0}: Error finding container 10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd: Status 404 returned error can't find the container with id 10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.704001 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" event={"ID":"2fce641e-1b76-4b99-a99d-9a0ccbf9680e","Type":"ContainerStarted","Data":"10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd"} Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.707211 4740 generic.go:334] "Generic (PLEG): container finished" podID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerID="1e87b39de0e646ce6ba449ee63a82579996c381dcaa6e162f4b5763dfb4523c5" exitCode=0 Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.708007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" 
event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerDied","Data":"1e87b39de0e646ce6ba449ee63a82579996c381dcaa6e162f4b5763dfb4523c5"} Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.774059 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.032899 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.086994 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cppt\" (UniqueName: \"kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt\") pod \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.087093 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config\") pod \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.087349 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle\") pod \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.087445 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs\") pod \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.087496 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config\") pod \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.097680 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4bc5b698-8fd6-4919-a02b-eb74665d83e0" (UID: "4bc5b698-8fd6-4919-a02b-eb74665d83e0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.098638 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt" (OuterVolumeSpecName: "kube-api-access-5cppt") pod "4bc5b698-8fd6-4919-a02b-eb74665d83e0" (UID: "4bc5b698-8fd6-4919-a02b-eb74665d83e0"). InnerVolumeSpecName "kube-api-access-5cppt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.169713 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4bc5b698-8fd6-4919-a02b-eb74665d83e0" (UID: "4bc5b698-8fd6-4919-a02b-eb74665d83e0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.182001 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bc5b698-8fd6-4919-a02b-eb74665d83e0" (UID: "4bc5b698-8fd6-4919-a02b-eb74665d83e0"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.187692 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config" (OuterVolumeSpecName: "config") pod "4bc5b698-8fd6-4919-a02b-eb74665d83e0" (UID: "4bc5b698-8fd6-4919-a02b-eb74665d83e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.197890 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.198499 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cppt\" (UniqueName: \"kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.198515 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.198524 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.198532 4740 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.341297 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" path="/var/lib/kubelet/pods/ea4149c3-a18d-46e3-86b1-8a60e9127244/volumes" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.718589 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerDied","Data":"1506e84718fd32b51421d0c20379f7ab72db7c5398d1bae1271890a3cd491379"} Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.718982 4740 scope.go:117] "RemoveContainer" containerID="90b8c13105002e23bcbcbbfad6decf1c72cc959f43ea05e2caf8e9e59758b0a9" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.718601 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.728663 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1da7f67c-ce66-4f6b-b760-f2ae017599c0","Type":"ContainerStarted","Data":"f230f1aef3a88a687ccdc2f9c83e07dc9225aedc4710c74e61723da15a524197"} Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.728710 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1da7f67c-ce66-4f6b-b760-f2ae017599c0","Type":"ContainerStarted","Data":"df3544372b26e6e50da341e4f25973095e526c28cdd3b97eea2ef30a428556ae"} Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.750040 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.752717 4740 scope.go:117] "RemoveContainer" containerID="1e87b39de0e646ce6ba449ee63a82579996c381dcaa6e162f4b5763dfb4523c5" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.762331 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.491045 4740 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.656664 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsswg\" (UniqueName: \"kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.656876 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.656907 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.656980 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.657020 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.657043 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.657083 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.658212 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs" (OuterVolumeSpecName: "logs") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.663704 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.663755 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg" (OuterVolumeSpecName: "kube-api-access-wsswg") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "kube-api-access-wsswg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.696614 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts" (OuterVolumeSpecName: "scripts") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.715763 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.719337 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data" (OuterVolumeSpecName: "config-data") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.728479 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.745099 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1da7f67c-ce66-4f6b-b760-f2ae017599c0","Type":"ContainerStarted","Data":"001873da5d2ed97221d998867ec288d071967aab9212258303cfac59327ae318"} Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.749972 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerID="450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0" exitCode=137 Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.750021 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerDied","Data":"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0"} Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.750318 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerDied","Data":"06bf25d33138128d15c50d580ea5273787a0565c881e9530d7786cb52837cf0e"} Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.750061 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.750369 4740 scope.go:117] "RemoveContainer" containerID="e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.756552 4740 generic.go:334] "Generic (PLEG): container finished" podID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerID="b47ff1a3cc1cf6e6ce635669516cb1eeff7f42e4967363ec4445652f9d813b11" exitCode=0 Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.756590 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerDied","Data":"b47ff1a3cc1cf6e6ce635669516cb1eeff7f42e4967363ec4445652f9d813b11"} Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.756614 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerDied","Data":"fc5f113e8cade2490d498d1de5db88d22ff7ac6d77cb5f061cb4a440e771185d"} Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.756626 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc5f113e8cade2490d498d1de5db88d22ff7ac6d77cb5f061cb4a440e771185d" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760017 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsswg\" (UniqueName: \"kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760042 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760051 4740 reconciler_common.go:293] "Volume detached for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760060 4740 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760070 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760081 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760089 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.770020 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.770004079 podStartE2EDuration="3.770004079s" podCreationTimestamp="2026-02-16 13:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:11:10.765777556 +0000 UTC m=+1098.142126277" watchObservedRunningTime="2026-02-16 13:11:10.770004079 +0000 UTC m=+1098.146352800" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.799476 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.804085 4740 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.810957 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.922144 4740 scope.go:117] "RemoveContainer" containerID="450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.943964 4740 scope.go:117] "RemoveContainer" containerID="e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8" Feb 16 13:11:10 crc kubenswrapper[4740]: E0216 13:11:10.944463 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8\": container with ID starting with e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8 not found: ID does not exist" containerID="e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.944506 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8"} err="failed to get container status \"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8\": rpc error: code = NotFound desc = could not find container \"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8\": container with ID starting with e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8 not found: ID does not exist" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.944531 4740 scope.go:117] "RemoveContainer" containerID="450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0" Feb 16 13:11:10 crc kubenswrapper[4740]: E0216 13:11:10.944947 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0\": container with ID starting with 450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0 not found: ID does not exist" containerID="450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.944981 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0"} err="failed to get container status \"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0\": rpc error: code = NotFound desc = could not find container \"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0\": container with ID starting with 450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0 not found: ID does not exist" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963090 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963158 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963192 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: 
I0216 13:11:10.963369 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963407 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggbsf\" (UniqueName: \"kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963426 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963449 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963511 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963734 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.964320 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.964339 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.966268 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf" (OuterVolumeSpecName: "kube-api-access-ggbsf") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "kube-api-access-ggbsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.966611 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts" (OuterVolumeSpecName: "scripts") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.005407 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.044381 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.066083 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.066116 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggbsf\" (UniqueName: \"kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.066132 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.066144 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.089125 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data" (OuterVolumeSpecName: "config-data") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.168446 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.297620 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" path="/var/lib/kubelet/pods/4bc5b698-8fd6-4919-a02b-eb74665d83e0/volumes" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.298395 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" path="/var/lib/kubelet/pods/9b0f3f50-6ea0-4ee0-af75-c020e91c8495/volumes" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.766419 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.798945 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.806635 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.821739 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822436 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="sg-core" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822460 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="sg-core" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822476 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" 
containerName="proxy-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822483 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="proxy-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822494 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822502 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822516 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-notification-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822523 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-notification-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822545 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon-log" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822552 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon-log" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822563 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822572 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822594 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" 
containerName="neutron-api" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822600 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-api" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822610 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-central-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822617 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-central-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822902 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="sg-core" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822919 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822938 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-notification-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822950 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-api" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822965 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="proxy-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822976 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822989 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-central-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822998 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon-log" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.824958 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.827208 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.827455 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.840093 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981134 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981189 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981245 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ps5p\" (UniqueName: \"kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p\") pod \"ceilometer-0\" (UID: 
\"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981272 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981286 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981308 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981355 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.082776 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.082876 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.082995 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ps5p\" (UniqueName: \"kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083037 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083058 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083106 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083457 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " 
pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083564 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083663 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.089045 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.090179 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.091257 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.099610 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ps5p\" (UniqueName: 
\"kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.100091 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.139978 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.989280 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:13 crc kubenswrapper[4740]: I0216 13:11:13.297274 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" path="/var/lib/kubelet/pods/772c2a1c-acd4-4227-829d-e4235742b5f4/volumes" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.104027 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.104372 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.135770 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.148128 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.575644 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.575708 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.807629 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.807668 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:11:16 crc kubenswrapper[4740]: I0216 13:11:16.818516 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" event={"ID":"2fce641e-1b76-4b99-a99d-9a0ccbf9680e","Type":"ContainerStarted","Data":"7dd2f91b7b77a95d9efed18149d982eaea3f14083dd0271b85d27533a0b37d3b"} Feb 16 13:11:16 crc kubenswrapper[4740]: I0216 13:11:16.838639 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" podStartSLOduration=1.5292308559999999 podStartE2EDuration="9.838622298s" podCreationTimestamp="2026-02-16 13:11:07 +0000 UTC" firstStartedPulling="2026-02-16 13:11:08.259495349 +0000 UTC m=+1095.635844070" lastFinishedPulling="2026-02-16 13:11:16.568886791 +0000 UTC m=+1103.945235512" observedRunningTime="2026-02-16 13:11:16.835121317 +0000 UTC m=+1104.211470038" watchObservedRunningTime="2026-02-16 13:11:16.838622298 +0000 UTC m=+1104.214971019" Feb 16 13:11:16 crc kubenswrapper[4740]: I0216 13:11:16.974001 4740 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:17 crc kubenswrapper[4740]: I0216 13:11:17.829504 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerStarted","Data":"3c08f3c4d132ba264a22af327ebe21b3c3ed4324088fef015b40790eb4878e70"} Feb 16 13:11:17 crc kubenswrapper[4740]: I0216 13:11:17.829874 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerStarted","Data":"c5df5e688bb71ab81893ca2057b6b0cc06003b38877fb4e309ba51316596edfc"} Feb 16 13:11:17 crc kubenswrapper[4740]: I0216 13:11:17.832424 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:11:17 crc kubenswrapper[4740]: I0216 13:11:17.833848 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:11:17 crc kubenswrapper[4740]: I0216 13:11:17.864342 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.097573 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.097622 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.131861 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.139600 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.844543 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerStarted","Data":"f251def4f6e79c938ed791e78c8d49d0d744dfe6ab6538388a7c75361bdf2939"} Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.844903 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.845456 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:19 crc kubenswrapper[4740]: I0216 13:11:19.852769 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerStarted","Data":"a1fe664ff4dba633e5c2ceeabea6e248cc29fb01509a09c4d810877a6db7482b"} Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.828220 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.844115 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.864243 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerStarted","Data":"10c1145d53cd2b89a58d20979cc3e78503d07b38832842c393d26e60199595dd"} Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.864497 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-central-agent" containerID="cri-o://3c08f3c4d132ba264a22af327ebe21b3c3ed4324088fef015b40790eb4878e70" gracePeriod=30 Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.864552 4740 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-notification-agent" containerID="cri-o://f251def4f6e79c938ed791e78c8d49d0d744dfe6ab6538388a7c75361bdf2939" gracePeriod=30 Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.864530 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="proxy-httpd" containerID="cri-o://10c1145d53cd2b89a58d20979cc3e78503d07b38832842c393d26e60199595dd" gracePeriod=30 Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.864547 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="sg-core" containerID="cri-o://a1fe664ff4dba633e5c2ceeabea6e248cc29fb01509a09c4d810877a6db7482b" gracePeriod=30 Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.911315 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.789800044 podStartE2EDuration="9.911299242s" podCreationTimestamp="2026-02-16 13:11:11 +0000 UTC" firstStartedPulling="2026-02-16 13:11:16.979566252 +0000 UTC m=+1104.355914973" lastFinishedPulling="2026-02-16 13:11:20.10106545 +0000 UTC m=+1107.477414171" observedRunningTime="2026-02-16 13:11:20.908891266 +0000 UTC m=+1108.285239977" watchObservedRunningTime="2026-02-16 13:11:20.911299242 +0000 UTC m=+1108.287647963" Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.875654 4740 generic.go:334] "Generic (PLEG): container finished" podID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerID="10c1145d53cd2b89a58d20979cc3e78503d07b38832842c393d26e60199595dd" exitCode=0 Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.876008 4740 generic.go:334] "Generic (PLEG): container finished" podID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" 
containerID="a1fe664ff4dba633e5c2ceeabea6e248cc29fb01509a09c4d810877a6db7482b" exitCode=2 Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.876024 4740 generic.go:334] "Generic (PLEG): container finished" podID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerID="f251def4f6e79c938ed791e78c8d49d0d744dfe6ab6538388a7c75361bdf2939" exitCode=0 Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.875859 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerDied","Data":"10c1145d53cd2b89a58d20979cc3e78503d07b38832842c393d26e60199595dd"} Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.876928 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerDied","Data":"a1fe664ff4dba633e5c2ceeabea6e248cc29fb01509a09c4d810877a6db7482b"} Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.876943 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerDied","Data":"f251def4f6e79c938ed791e78c8d49d0d744dfe6ab6538388a7c75361bdf2939"} Feb 16 13:11:25 crc kubenswrapper[4740]: I0216 13:11:25.918989 4740 generic.go:334] "Generic (PLEG): container finished" podID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerID="3c08f3c4d132ba264a22af327ebe21b3c3ed4324088fef015b40790eb4878e70" exitCode=0 Feb 16 13:11:25 crc kubenswrapper[4740]: I0216 13:11:25.919052 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerDied","Data":"3c08f3c4d132ba264a22af327ebe21b3c3ed4324088fef015b40790eb4878e70"} Feb 16 13:11:25 crc kubenswrapper[4740]: I0216 13:11:25.919669 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerDied","Data":"c5df5e688bb71ab81893ca2057b6b0cc06003b38877fb4e309ba51316596edfc"} Feb 16 13:11:25 crc kubenswrapper[4740]: I0216 13:11:25.919694 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5df5e688bb71ab81893ca2057b6b0cc06003b38877fb4e309ba51316596edfc" Feb 16 13:11:25 crc kubenswrapper[4740]: I0216 13:11:25.920717 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.061871 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ps5p\" (UniqueName: \"kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.061938 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062006 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062036 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: 
I0216 13:11:26.062074 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062115 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062275 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062739 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062900 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.075910 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts" (OuterVolumeSpecName: "scripts") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.078199 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p" (OuterVolumeSpecName: "kube-api-access-9ps5p") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "kube-api-access-9ps5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.089925 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.155874 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data" (OuterVolumeSpecName: "config-data") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164048 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164083 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ps5p\" (UniqueName: \"kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164098 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164108 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164119 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164130 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.170018 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.266136 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.931163 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.964606 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.972470 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.992206 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:26 crc kubenswrapper[4740]: E0216 13:11:26.992683 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="proxy-httpd" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.992719 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="proxy-httpd" Feb 16 13:11:26 crc kubenswrapper[4740]: E0216 13:11:26.992738 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-notification-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.992746 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-notification-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: E0216 13:11:26.992771 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="sg-core" Feb 
16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.992779 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="sg-core" Feb 16 13:11:26 crc kubenswrapper[4740]: E0216 13:11:26.992791 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-central-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.992799 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-central-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.993066 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="proxy-httpd" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.993085 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-central-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.993106 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-notification-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.993125 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="sg-core" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.995238 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.999445 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.999816 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.014420 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117105 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117148 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txnw9\" (UniqueName: \"kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117167 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117183 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " 
pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117328 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117354 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117389 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218580 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218671 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218697 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-txnw9\" (UniqueName: \"kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218728 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218871 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218902 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.219355 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc 
kubenswrapper[4740]: I0216 13:11:27.219598 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.224875 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.225119 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.225709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.228715 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.242568 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txnw9\" (UniqueName: \"kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9\") pod \"ceilometer-0\" (UID: 
\"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.297961 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" path="/var/lib/kubelet/pods/95a355c5-9192-49fa-9d5d-ca9d1cba83c5/volumes" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.320612 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.767942 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:27 crc kubenswrapper[4740]: W0216 13:11:27.770724 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f7d441d_037c_4d9b_a593_295360acb873.slice/crio-ca966942a1948c048e34dd7db2248c602d2368f1200c700463ea4b5111339e2c WatchSource:0}: Error finding container ca966942a1948c048e34dd7db2248c602d2368f1200c700463ea4b5111339e2c: Status 404 returned error can't find the container with id ca966942a1948c048e34dd7db2248c602d2368f1200c700463ea4b5111339e2c Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.940645 4740 generic.go:334] "Generic (PLEG): container finished" podID="2fce641e-1b76-4b99-a99d-9a0ccbf9680e" containerID="7dd2f91b7b77a95d9efed18149d982eaea3f14083dd0271b85d27533a0b37d3b" exitCode=0 Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.940718 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" event={"ID":"2fce641e-1b76-4b99-a99d-9a0ccbf9680e","Type":"ContainerDied","Data":"7dd2f91b7b77a95d9efed18149d982eaea3f14083dd0271b85d27533a0b37d3b"} Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.942146 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerStarted","Data":"ca966942a1948c048e34dd7db2248c602d2368f1200c700463ea4b5111339e2c"} Feb 16 13:11:28 crc kubenswrapper[4740]: I0216 13:11:28.955992 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerStarted","Data":"af71ecf0eaf794ce88d60d93ac05685fd22518f9e51a07094d60474546203d7f"} Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.309566 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.459532 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle\") pod \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.459635 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts\") pod \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.459741 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjmbn\" (UniqueName: \"kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn\") pod \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.459779 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data\") pod 
\"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.472044 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn" (OuterVolumeSpecName: "kube-api-access-qjmbn") pod "2fce641e-1b76-4b99-a99d-9a0ccbf9680e" (UID: "2fce641e-1b76-4b99-a99d-9a0ccbf9680e"). InnerVolumeSpecName "kube-api-access-qjmbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.472154 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts" (OuterVolumeSpecName: "scripts") pod "2fce641e-1b76-4b99-a99d-9a0ccbf9680e" (UID: "2fce641e-1b76-4b99-a99d-9a0ccbf9680e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.486759 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data" (OuterVolumeSpecName: "config-data") pod "2fce641e-1b76-4b99-a99d-9a0ccbf9680e" (UID: "2fce641e-1b76-4b99-a99d-9a0ccbf9680e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.508470 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fce641e-1b76-4b99-a99d-9a0ccbf9680e" (UID: "2fce641e-1b76-4b99-a99d-9a0ccbf9680e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.561396 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.561430 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.561440 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjmbn\" (UniqueName: \"kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.561452 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.965996 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" event={"ID":"2fce641e-1b76-4b99-a99d-9a0ccbf9680e","Type":"ContainerDied","Data":"10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd"} Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.966039 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.966079 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.128922 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:11:30 crc kubenswrapper[4740]: E0216 13:11:30.129675 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce641e-1b76-4b99-a99d-9a0ccbf9680e" containerName="nova-cell0-conductor-db-sync" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.129700 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce641e-1b76-4b99-a99d-9a0ccbf9680e" containerName="nova-cell0-conductor-db-sync" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.129975 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fce641e-1b76-4b99-a99d-9a0ccbf9680e" containerName="nova-cell0-conductor-db-sync" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.130679 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.133322 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-45m6j" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.134489 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.140360 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.277175 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: 
I0216 13:11:30.277379 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlss6\" (UniqueName: \"kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.277542 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.379591 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlss6\" (UniqueName: \"kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.379682 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.379862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.384235 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.387262 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.396915 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlss6\" (UniqueName: \"kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.495578 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.922805 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:11:30 crc kubenswrapper[4740]: W0216 13:11:30.930961 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod177b2d8c_29ab_49ea_8509_12b489123ad9.slice/crio-e1ff36319be19db06a0b37da7df05b96fab8d1846a2ea48349ac6f0003bfbf91 WatchSource:0}: Error finding container e1ff36319be19db06a0b37da7df05b96fab8d1846a2ea48349ac6f0003bfbf91: Status 404 returned error can't find the container with id e1ff36319be19db06a0b37da7df05b96fab8d1846a2ea48349ac6f0003bfbf91 Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.979026 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerStarted","Data":"b848f033913ec604afefeb06b722f5ccea1a3a3dbf64288c6d13e841b678efd1"} Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.979078 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerStarted","Data":"b1ecd77baacb422358d8a5e9615014969aa6e2e298364e95fd12faac16699d11"} Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.980328 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"177b2d8c-29ab-49ea-8509-12b489123ad9","Type":"ContainerStarted","Data":"e1ff36319be19db06a0b37da7df05b96fab8d1846a2ea48349ac6f0003bfbf91"} Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.990333 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:11:31 crc kubenswrapper[4740]: I0216 13:11:31.992235 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"177b2d8c-29ab-49ea-8509-12b489123ad9","Type":"ContainerStarted","Data":"29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068"} Feb 16 13:11:31 crc kubenswrapper[4740]: I0216 13:11:31.992576 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:31 crc kubenswrapper[4740]: I0216 13:11:31.992410 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" containerID="cri-o://29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" gracePeriod=30 Feb 16 13:11:32 crc kubenswrapper[4740]: I0216 13:11:32.021911 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.021860579 podStartE2EDuration="2.021860579s" podCreationTimestamp="2026-02-16 13:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:11:32.015735866 +0000 UTC m=+1119.392084587" watchObservedRunningTime="2026-02-16 13:11:32.021860579 +0000 UTC m=+1119.398209320" Feb 16 13:11:32 crc kubenswrapper[4740]: I0216 13:11:32.736436 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:33 crc kubenswrapper[4740]: I0216 13:11:33.005264 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerStarted","Data":"15ce57e0100f22576756ff2e0ea84492ae15d4243e46123fc4c41c853a4db474"} Feb 16 13:11:33 crc kubenswrapper[4740]: I0216 13:11:33.006740 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:11:33 crc kubenswrapper[4740]: I0216 13:11:33.030979 4740 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ceilometer-0" podStartSLOduration=2.772290543 podStartE2EDuration="7.030960403s" podCreationTimestamp="2026-02-16 13:11:26 +0000 UTC" firstStartedPulling="2026-02-16 13:11:27.772795233 +0000 UTC m=+1115.149143954" lastFinishedPulling="2026-02-16 13:11:32.031465093 +0000 UTC m=+1119.407813814" observedRunningTime="2026-02-16 13:11:33.027942137 +0000 UTC m=+1120.404290878" watchObservedRunningTime="2026-02-16 13:11:33.030960403 +0000 UTC m=+1120.407309134" Feb 16 13:11:34 crc kubenswrapper[4740]: I0216 13:11:34.014146 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-central-agent" containerID="cri-o://af71ecf0eaf794ce88d60d93ac05685fd22518f9e51a07094d60474546203d7f" gracePeriod=30 Feb 16 13:11:34 crc kubenswrapper[4740]: I0216 13:11:34.014726 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="proxy-httpd" containerID="cri-o://15ce57e0100f22576756ff2e0ea84492ae15d4243e46123fc4c41c853a4db474" gracePeriod=30 Feb 16 13:11:34 crc kubenswrapper[4740]: I0216 13:11:34.014852 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="sg-core" containerID="cri-o://b848f033913ec604afefeb06b722f5ccea1a3a3dbf64288c6d13e841b678efd1" gracePeriod=30 Feb 16 13:11:34 crc kubenswrapper[4740]: I0216 13:11:34.015046 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-notification-agent" containerID="cri-o://b1ecd77baacb422358d8a5e9615014969aa6e2e298364e95fd12faac16699d11" gracePeriod=30 Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026087 4740 generic.go:334] "Generic (PLEG): container 
finished" podID="9f7d441d-037c-4d9b-a593-295360acb873" containerID="15ce57e0100f22576756ff2e0ea84492ae15d4243e46123fc4c41c853a4db474" exitCode=0 Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026470 4740 generic.go:334] "Generic (PLEG): container finished" podID="9f7d441d-037c-4d9b-a593-295360acb873" containerID="b848f033913ec604afefeb06b722f5ccea1a3a3dbf64288c6d13e841b678efd1" exitCode=2 Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026481 4740 generic.go:334] "Generic (PLEG): container finished" podID="9f7d441d-037c-4d9b-a593-295360acb873" containerID="b1ecd77baacb422358d8a5e9615014969aa6e2e298364e95fd12faac16699d11" exitCode=0 Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026491 4740 generic.go:334] "Generic (PLEG): container finished" podID="9f7d441d-037c-4d9b-a593-295360acb873" containerID="af71ecf0eaf794ce88d60d93ac05685fd22518f9e51a07094d60474546203d7f" exitCode=0 Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026516 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerDied","Data":"15ce57e0100f22576756ff2e0ea84492ae15d4243e46123fc4c41c853a4db474"} Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026548 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerDied","Data":"b848f033913ec604afefeb06b722f5ccea1a3a3dbf64288c6d13e841b678efd1"} Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026562 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerDied","Data":"b1ecd77baacb422358d8a5e9615014969aa6e2e298364e95fd12faac16699d11"} Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026576 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerDied","Data":"af71ecf0eaf794ce88d60d93ac05685fd22518f9e51a07094d60474546203d7f"} Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.275586 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373543 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373647 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373697 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373794 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373843 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txnw9\" (UniqueName: 
\"kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373878 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373952 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.374403 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.374529 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.379207 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts" (OuterVolumeSpecName: "scripts") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.384018 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9" (OuterVolumeSpecName: "kube-api-access-txnw9") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "kube-api-access-txnw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.401671 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.458952 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475902 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475941 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475956 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475967 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475982 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txnw9\" (UniqueName: \"kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475997 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.478229 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data" (OuterVolumeSpecName: "config-data") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.577584 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.040282 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerDied","Data":"ca966942a1948c048e34dd7db2248c602d2368f1200c700463ea4b5111339e2c"} Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.040347 4740 scope.go:117] "RemoveContainer" containerID="15ce57e0100f22576756ff2e0ea84492ae15d4243e46123fc4c41c853a4db474" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.040371 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.059171 4740 scope.go:117] "RemoveContainer" containerID="b848f033913ec604afefeb06b722f5ccea1a3a3dbf64288c6d13e841b678efd1" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.079932 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.082364 4740 scope.go:117] "RemoveContainer" containerID="b1ecd77baacb422358d8a5e9615014969aa6e2e298364e95fd12faac16699d11" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.091261 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.105608 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:36 crc kubenswrapper[4740]: E0216 13:11:36.105965 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7d441d-037c-4d9b-a593-295360acb873" 
containerName="ceilometer-notification-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.105983 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-notification-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: E0216 13:11:36.106015 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="proxy-httpd" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106021 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="proxy-httpd" Feb 16 13:11:36 crc kubenswrapper[4740]: E0216 13:11:36.106034 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-central-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106040 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-central-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: E0216 13:11:36.106056 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="sg-core" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106061 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="sg-core" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106248 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="proxy-httpd" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106262 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="sg-core" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106280 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-notification-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106291 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-central-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.108868 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.110618 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.110716 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.119503 4740 scope.go:117] "RemoveContainer" containerID="af71ecf0eaf794ce88d60d93ac05685fd22518f9e51a07094d60474546203d7f" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.119691 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291220 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291288 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291396 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291659 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291841 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrd24\" (UniqueName: \"kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291948 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394108 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394274 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394323 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394512 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrd24\" (UniqueName: \"kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394601 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 
13:11:36.394688 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.396651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.396923 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.399501 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.400083 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.401450 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " 
pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.410198 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.413126 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrd24\" (UniqueName: \"kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.427508 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.899152 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:36 crc kubenswrapper[4740]: W0216 13:11:36.902657 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6a38c8_9c96_44fc_a4cc_247e314350b0.slice/crio-8389d1c57a3628927feb7f7fa0fa61763312bafbfe5d743cb250a469e83bb102 WatchSource:0}: Error finding container 8389d1c57a3628927feb7f7fa0fa61763312bafbfe5d743cb250a469e83bb102: Status 404 returned error can't find the container with id 8389d1c57a3628927feb7f7fa0fa61763312bafbfe5d743cb250a469e83bb102 Feb 16 13:11:37 crc kubenswrapper[4740]: I0216 13:11:37.050484 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerStarted","Data":"8389d1c57a3628927feb7f7fa0fa61763312bafbfe5d743cb250a469e83bb102"} Feb 16 13:11:37 crc kubenswrapper[4740]: I0216 13:11:37.291899 4740 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="9f7d441d-037c-4d9b-a593-295360acb873" path="/var/lib/kubelet/pods/9f7d441d-037c-4d9b-a593-295360acb873/volumes" Feb 16 13:11:38 crc kubenswrapper[4740]: I0216 13:11:38.076747 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerStarted","Data":"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f"} Feb 16 13:11:39 crc kubenswrapper[4740]: I0216 13:11:39.087664 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerStarted","Data":"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706"} Feb 16 13:11:39 crc kubenswrapper[4740]: I0216 13:11:39.088023 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerStarted","Data":"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d"} Feb 16 13:11:40 crc kubenswrapper[4740]: E0216 13:11:40.498855 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:40 crc kubenswrapper[4740]: E0216 13:11:40.502060 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:40 crc kubenswrapper[4740]: E0216 13:11:40.503644 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:40 crc kubenswrapper[4740]: E0216 13:11:40.503686 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:11:41 crc kubenswrapper[4740]: I0216 13:11:41.107792 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerStarted","Data":"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991"} Feb 16 13:11:41 crc kubenswrapper[4740]: I0216 13:11:41.107999 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:11:41 crc kubenswrapper[4740]: I0216 13:11:41.128333 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.979587201 podStartE2EDuration="5.128311647s" podCreationTimestamp="2026-02-16 13:11:36 +0000 UTC" firstStartedPulling="2026-02-16 13:11:36.906037016 +0000 UTC m=+1124.282385737" lastFinishedPulling="2026-02-16 13:11:40.054761462 +0000 UTC m=+1127.431110183" observedRunningTime="2026-02-16 13:11:41.126630944 +0000 UTC m=+1128.502979685" watchObservedRunningTime="2026-02-16 13:11:41.128311647 +0000 UTC m=+1128.504660378" Feb 16 13:11:45 crc kubenswrapper[4740]: E0216 13:11:45.499408 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:45 crc kubenswrapper[4740]: E0216 13:11:45.500952 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:45 crc kubenswrapper[4740]: E0216 13:11:45.502575 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:45 crc kubenswrapper[4740]: E0216 13:11:45.502610 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:11:45 crc kubenswrapper[4740]: I0216 13:11:45.575185 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:11:45 crc kubenswrapper[4740]: I0216 13:11:45.575265 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:11:45 crc kubenswrapper[4740]: I0216 13:11:45.575321 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:11:45 crc kubenswrapper[4740]: I0216 13:11:45.576416 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:11:45 crc kubenswrapper[4740]: I0216 13:11:45.576520 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f" gracePeriod=600 Feb 16 13:11:46 crc kubenswrapper[4740]: I0216 13:11:46.163015 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f" exitCode=0 Feb 16 13:11:46 crc kubenswrapper[4740]: I0216 13:11:46.163094 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f"} Feb 16 13:11:46 crc kubenswrapper[4740]: I0216 13:11:46.163478 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" 
event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1"} Feb 16 13:11:46 crc kubenswrapper[4740]: I0216 13:11:46.163520 4740 scope.go:117] "RemoveContainer" containerID="edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3" Feb 16 13:11:50 crc kubenswrapper[4740]: E0216 13:11:50.502418 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:50 crc kubenswrapper[4740]: E0216 13:11:50.506367 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:50 crc kubenswrapper[4740]: E0216 13:11:50.507990 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:50 crc kubenswrapper[4740]: E0216 13:11:50.508067 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:11:55 crc kubenswrapper[4740]: E0216 13:11:55.498779 4740 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:55 crc kubenswrapper[4740]: E0216 13:11:55.501312 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:55 crc kubenswrapper[4740]: E0216 13:11:55.502701 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:55 crc kubenswrapper[4740]: E0216 13:11:55.502825 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:12:00 crc kubenswrapper[4740]: E0216 13:12:00.499059 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:12:00 crc kubenswrapper[4740]: E0216 13:12:00.501364 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:12:00 crc kubenswrapper[4740]: E0216 13:12:00.503352 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:12:00 crc kubenswrapper[4740]: E0216 13:12:00.503412 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.327622 4740 generic.go:334] "Generic (PLEG): container finished" podID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" exitCode=137 Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.327696 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"177b2d8c-29ab-49ea-8509-12b489123ad9","Type":"ContainerDied","Data":"29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068"} Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.451417 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.595923 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data\") pod \"177b2d8c-29ab-49ea-8509-12b489123ad9\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.596017 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle\") pod \"177b2d8c-29ab-49ea-8509-12b489123ad9\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.596142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlss6\" (UniqueName: \"kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6\") pod \"177b2d8c-29ab-49ea-8509-12b489123ad9\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.604092 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6" (OuterVolumeSpecName: "kube-api-access-vlss6") pod "177b2d8c-29ab-49ea-8509-12b489123ad9" (UID: "177b2d8c-29ab-49ea-8509-12b489123ad9"). InnerVolumeSpecName "kube-api-access-vlss6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.622911 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data" (OuterVolumeSpecName: "config-data") pod "177b2d8c-29ab-49ea-8509-12b489123ad9" (UID: "177b2d8c-29ab-49ea-8509-12b489123ad9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.626487 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "177b2d8c-29ab-49ea-8509-12b489123ad9" (UID: "177b2d8c-29ab-49ea-8509-12b489123ad9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.697517 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.697565 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.697582 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlss6\" (UniqueName: \"kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.337102 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"177b2d8c-29ab-49ea-8509-12b489123ad9","Type":"ContainerDied","Data":"e1ff36319be19db06a0b37da7df05b96fab8d1846a2ea48349ac6f0003bfbf91"} Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.337156 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.337451 4740 scope.go:117] "RemoveContainer" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.359498 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.369557 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.396943 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:12:03 crc kubenswrapper[4740]: E0216 13:12:03.397993 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.398018 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.398605 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.399480 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.403206 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.411426 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-45m6j" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.412846 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.512986 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.513074 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.513114 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqwwv\" (UniqueName: \"kubernetes.io/projected/07256285-a907-4822-80dc-b5f5866d437f-kube-api-access-jqwwv\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.615389 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.615446 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.615474 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqwwv\" (UniqueName: \"kubernetes.io/projected/07256285-a907-4822-80dc-b5f5866d437f-kube-api-access-jqwwv\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.622575 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.630406 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.633383 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqwwv\" (UniqueName: \"kubernetes.io/projected/07256285-a907-4822-80dc-b5f5866d437f-kube-api-access-jqwwv\") pod \"nova-cell0-conductor-0\" (UID: 
\"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.739516 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:04 crc kubenswrapper[4740]: I0216 13:12:04.199554 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:12:04 crc kubenswrapper[4740]: I0216 13:12:04.354460 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"07256285-a907-4822-80dc-b5f5866d437f","Type":"ContainerStarted","Data":"bb82e6aade05f9fff17a43670718f2e4c7d6a343b3ebcdf2241c634c7a6e945f"} Feb 16 13:12:05 crc kubenswrapper[4740]: I0216 13:12:05.301795 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" path="/var/lib/kubelet/pods/177b2d8c-29ab-49ea-8509-12b489123ad9/volumes" Feb 16 13:12:05 crc kubenswrapper[4740]: I0216 13:12:05.381117 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"07256285-a907-4822-80dc-b5f5866d437f","Type":"ContainerStarted","Data":"6fae93798a52b2b1b11faf82fd2a15be794c0723a2bd5840daef1bd25b3955b4"} Feb 16 13:12:05 crc kubenswrapper[4740]: I0216 13:12:05.382946 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:05 crc kubenswrapper[4740]: I0216 13:12:05.413968 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.413946279 podStartE2EDuration="2.413946279s" podCreationTimestamp="2026-02-16 13:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:05.403736656 +0000 UTC m=+1152.780085387" watchObservedRunningTime="2026-02-16 13:12:05.413946279 +0000 
UTC m=+1152.790295010" Feb 16 13:12:06 crc kubenswrapper[4740]: I0216 13:12:06.435962 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 13:12:09 crc kubenswrapper[4740]: I0216 13:12:09.913625 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:09 crc kubenswrapper[4740]: I0216 13:12:09.914344 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" containerName="kube-state-metrics" containerID="cri-o://393ab583d053b27f8beb9f7c43ec09687ddca9d3ed124563beb0f63d010c9ebb" gracePeriod=30 Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.425688 4740 generic.go:334] "Generic (PLEG): container finished" podID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" containerID="393ab583d053b27f8beb9f7c43ec09687ddca9d3ed124563beb0f63d010c9ebb" exitCode=2 Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.425942 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dffdca64-bf57-49ca-9d8d-c6c752e59a37","Type":"ContainerDied","Data":"393ab583d053b27f8beb9f7c43ec09687ddca9d3ed124563beb0f63d010c9ebb"} Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.425967 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dffdca64-bf57-49ca-9d8d-c6c752e59a37","Type":"ContainerDied","Data":"866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d"} Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.425977 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d" Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.481754 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.560845 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpkcs\" (UniqueName: \"kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs\") pod \"dffdca64-bf57-49ca-9d8d-c6c752e59a37\" (UID: \"dffdca64-bf57-49ca-9d8d-c6c752e59a37\") " Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.576129 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs" (OuterVolumeSpecName: "kube-api-access-fpkcs") pod "dffdca64-bf57-49ca-9d8d-c6c752e59a37" (UID: "dffdca64-bf57-49ca-9d8d-c6c752e59a37"). InnerVolumeSpecName "kube-api-access-fpkcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.662714 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpkcs\" (UniqueName: \"kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.434071 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.457112 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.470072 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.512655 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:11 crc kubenswrapper[4740]: E0216 13:12:11.513922 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" containerName="kube-state-metrics" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.513943 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" containerName="kube-state-metrics" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.514335 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" containerName="kube-state-metrics" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.515250 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.517136 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.517796 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.523749 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.588841 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.588953 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.589006 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqn4z\" (UniqueName: \"kubernetes.io/projected/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-api-access-cqn4z\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.589131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.690666 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.690726 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.690751 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqn4z\" (UniqueName: \"kubernetes.io/projected/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-api-access-cqn4z\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.690773 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.698850 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.699463 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.704285 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.714526 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqn4z\" (UniqueName: \"kubernetes.io/projected/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-api-access-cqn4z\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.804334 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.804673 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-central-agent" containerID="cri-o://c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f" gracePeriod=30 Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.804703 4740 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="proxy-httpd" containerID="cri-o://0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991" gracePeriod=30 Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.804792 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-notification-agent" containerID="cri-o://4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d" gracePeriod=30 Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.805020 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="sg-core" containerID="cri-o://6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706" gracePeriod=30 Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.833689 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.272464 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.446559 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05c7ea6d-5a24-4b21-851c-e7d51fa61a38","Type":"ContainerStarted","Data":"f2ae81f20227596beb1c048c192eb62e88f8dbb91ec4dc48019e683a75a9d3ba"} Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.449941 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerID="0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991" exitCode=0 Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.449964 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerID="6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706" exitCode=2 Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.449971 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerID="c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f" exitCode=0 Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.449983 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerDied","Data":"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991"} Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.450004 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerDied","Data":"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706"} Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.450013 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerDied","Data":"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f"} Feb 16 13:12:13 crc kubenswrapper[4740]: I0216 13:12:13.293222 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" path="/var/lib/kubelet/pods/dffdca64-bf57-49ca-9d8d-c6c752e59a37/volumes" Feb 16 13:12:13 crc kubenswrapper[4740]: I0216 13:12:13.460246 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05c7ea6d-5a24-4b21-851c-e7d51fa61a38","Type":"ContainerStarted","Data":"d7eac0d86b4b1671eb3ce7edb75266b95feed3d4860ce524d31e6b6e05ac8c57"} Feb 16 13:12:13 crc kubenswrapper[4740]: I0216 13:12:13.461283 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 13:12:13 crc kubenswrapper[4740]: I0216 13:12:13.477478 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.125804058 podStartE2EDuration="2.477460527s" podCreationTimestamp="2026-02-16 13:12:11 +0000 UTC" firstStartedPulling="2026-02-16 13:12:12.2788761 +0000 UTC m=+1159.655224821" lastFinishedPulling="2026-02-16 13:12:12.630532569 +0000 UTC m=+1160.006881290" observedRunningTime="2026-02-16 13:12:13.475116863 +0000 UTC m=+1160.851465594" watchObservedRunningTime="2026-02-16 13:12:13.477460527 +0000 UTC m=+1160.853809248" Feb 16 13:12:13 crc kubenswrapper[4740]: I0216 13:12:13.771657 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.268318 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8f7gd"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.273542 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.276105 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.276268 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.285540 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8f7gd"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.341262 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.341443 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz5wj\" (UniqueName: \"kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.341912 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.341979 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.438018 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.444409 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.444463 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.444539 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.444592 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz5wj\" (UniqueName: \"kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 
13:12:14.453576 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.463475 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.482838 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.492571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz5wj\" (UniqueName: \"kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.512113 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerID="4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d" exitCode=0 Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.512467 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.512486 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerDied","Data":"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d"} Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.512593 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerDied","Data":"8389d1c57a3628927feb7f7fa0fa61763312bafbfe5d743cb250a469e83bb102"} Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.512630 4740 scope.go:117] "RemoveContainer" containerID="0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.542928 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.543281 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="proxy-httpd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543293 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="proxy-httpd" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.543327 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-central-agent" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543333 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-central-agent" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.543344 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-notification-agent" 
Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543350 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-notification-agent" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.543372 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="sg-core" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543378 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="sg-core" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543524 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="sg-core" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543544 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-notification-agent" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543557 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-central-agent" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543567 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="proxy-httpd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.546616 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.550652 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.550877 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.550935 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.551024 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.551067 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrd24\" (UniqueName: \"kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.551155 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.551214 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.551916 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.552183 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.553109 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.559373 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.576380 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.578759 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts" (OuterVolumeSpecName: "scripts") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.585529 4740 scope.go:117] "RemoveContainer" containerID="6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.585562 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24" (OuterVolumeSpecName: "kube-api-access-mrd24") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "kube-api-access-mrd24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.600744 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.647761 4740 scope.go:117] "RemoveContainer" containerID="4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.655907 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.656061 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4qt\" (UniqueName: \"kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.656089 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.656131 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrd24\" (UniqueName: \"kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.656142 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 
13:12:14.656153 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.678763 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.687681 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.701957 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.708315 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.714794 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.725126 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.725159 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.725230 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.734770 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.736182 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.736754 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.737396 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.741007 4740 scope.go:117] "RemoveContainer" containerID="c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.750024 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757365 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757410 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757435 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757464 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757494 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757515 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtlkn\" (UniqueName: \"kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757535 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8nmx\" (UniqueName: \"kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757555 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs\") 
pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757574 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757606 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zctqv\" (UniqueName: \"kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757679 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4qt\" (UniqueName: \"kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757711 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " 
pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757746 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.758592 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.762713 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.763645 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.764177 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.790612 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.790681 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.800224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg4qt\" (UniqueName: \"kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.799118 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data" (OuterVolumeSpecName: "config-data") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.841917 4740 scope.go:117] "RemoveContainer" containerID="0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.843824 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991\": container with ID starting with 0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991 not found: ID does not exist" containerID="0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.843862 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991"} err="failed to get container status \"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991\": rpc error: code = NotFound desc = could not find container \"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991\": container with ID starting with 0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991 not found: ID does not exist" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.843888 4740 scope.go:117] "RemoveContainer" containerID="6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.844417 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706\": container with ID starting with 6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706 not found: ID does not exist" containerID="6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.844433 
4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706"} err="failed to get container status \"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706\": rpc error: code = NotFound desc = could not find container \"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706\": container with ID starting with 6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706 not found: ID does not exist" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.844446 4740 scope.go:117] "RemoveContainer" containerID="4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.844748 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d\": container with ID starting with 4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d not found: ID does not exist" containerID="4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.844763 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d"} err="failed to get container status \"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d\": rpc error: code = NotFound desc = could not find container \"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d\": container with ID starting with 4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d not found: ID does not exist" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.844775 4740 scope.go:117] "RemoveContainer" containerID="c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 
13:12:14.844963 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f\": container with ID starting with c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f not found: ID does not exist" containerID="c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.844977 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f"} err="failed to get container status \"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f\": rpc error: code = NotFound desc = could not find container \"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f\": container with ID starting with c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f not found: ID does not exist" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.863931 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864656 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864722 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864774 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864795 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864840 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865053 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865090 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865119 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4b75\" (UniqueName: \"kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865142 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865165 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mtlkn\" (UniqueName: \"kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865184 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8nmx\" (UniqueName: \"kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865296 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865592 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865624 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865649 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zctqv\" (UniqueName: \"kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv\") 
pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865672 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865781 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865794 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.867841 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.868043 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.868393 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs\") pod \"nova-metadata-0\" (UID: 
\"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.871598 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.871953 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.872173 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.882560 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.895124 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.899032 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mtlkn\" (UniqueName: \"kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.899469 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zctqv\" (UniqueName: \"kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.903891 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8nmx\" (UniqueName: \"kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.971850 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.971898 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.971944 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.971970 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.971995 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4b75\" (UniqueName: \"kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.972061 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.973138 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.973160 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config\") 
pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.973334 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.974000 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.975082 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.979396 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.989833 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4b75\" (UniqueName: \"kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.096605 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.131364 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.143686 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.153497 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.157910 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.163789 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.219485 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.248078 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.248225 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.257352 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.257637 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.269386 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.273174 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8f7gd"] Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292557 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292628 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck59f\" (UniqueName: \"kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292669 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292686 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292703 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292719 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292792 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.304000 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" 
path="/var/lib/kubelet/pods/ef6a38c8-9c96-44fc-a4cc-247e314350b0/volumes" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.394706 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.394862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.394965 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395043 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck59f\" (UniqueName: \"kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395126 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395151 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395179 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395205 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395434 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.397138 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.406676 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.408258 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.408497 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.413525 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.429893 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.448030 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck59f\" (UniqueName: \"kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.558025 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:15 crc kubenswrapper[4740]: W0216 13:12:15.561361 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcb162a0_b3b3_4b3a_b8f2_4d3a2997437f.slice/crio-049599c73a96f0a67b4dc198dd8d7eea6d60e002de96297a4f60108bb30585bc WatchSource:0}: Error finding container 049599c73a96f0a67b4dc198dd8d7eea6d60e002de96297a4f60108bb30585bc: Status 404 returned error can't find the container with id 049599c73a96f0a67b4dc198dd8d7eea6d60e002de96297a4f60108bb30585bc Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.562380 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8f7gd" event={"ID":"9f4deadb-18ac-4d06-ba22-e391b19d38cd","Type":"ContainerStarted","Data":"b78b180e360c9e62f97483c5f9447dac4cda05293b92d3033ba99440494240fd"} Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.593287 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.827686 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:15 crc kubenswrapper[4740]: W0216 13:12:15.838522 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod620a5f5a_99ec_4801_ba17_d9a5fa7ea7ac.slice/crio-8f6654002ae747ce606f1b1f23d169b4f4ec2bec72a65e7471234bf4a2173103 WatchSource:0}: Error finding container 8f6654002ae747ce606f1b1f23d169b4f4ec2bec72a65e7471234bf4a2173103: Status 404 returned error can't find the container with id 8f6654002ae747ce606f1b1f23d169b4f4ec2bec72a65e7471234bf4a2173103 Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.839744 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.910324 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8crw8"] Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.911915 4740 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.915779 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.916413 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.940155 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8crw8"] Feb 16 13:12:15 crc kubenswrapper[4740]: W0216 13:12:15.989100 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod013bed9b_6a31_4094_bb95_addbf3f4bd01.slice/crio-3cfed7c43284629a6351a4ee50333996e9abd11f4199ce2c961af5116ad1bff2 WatchSource:0}: Error finding container 3cfed7c43284629a6351a4ee50333996e9abd11f4199ce2c961af5116ad1bff2: Status 404 returned error can't find the container with id 3cfed7c43284629a6351a4ee50333996e9abd11f4199ce2c961af5116ad1bff2 Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.989346 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.023366 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbsrj\" (UniqueName: \"kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.023434 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.023591 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.023703 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: W0216 13:12:16.110142 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56c1f9b3_2d24_4a15_a9ed_1e580d07368d.slice/crio-0f5c11f9e9c71c25a9a0ab7c58bca8206f5d89e3f123e7f468eccce30de6f1c4 WatchSource:0}: Error finding container 0f5c11f9e9c71c25a9a0ab7c58bca8206f5d89e3f123e7f468eccce30de6f1c4: Status 404 returned error can't find the container with id 0f5c11f9e9c71c25a9a0ab7c58bca8206f5d89e3f123e7f468eccce30de6f1c4 Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.111788 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"] Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.125193 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts\") pod 
\"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.125277 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbsrj\" (UniqueName: \"kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.125304 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.125370 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.130900 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.135016 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data\") pod 
\"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.139883 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.144542 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbsrj\" (UniqueName: \"kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.249287 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.252847 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:16 crc kubenswrapper[4740]: W0216 13:12:16.268133 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7722219_9b84_4adf_bf81_c4ac8a0d9d2c.slice/crio-b0fc28209f10a3e996f2004b2935e7f7f0d3702abe3519ffcea0b77c33e5a593 WatchSource:0}: Error finding container b0fc28209f10a3e996f2004b2935e7f7f0d3702abe3519ffcea0b77c33e5a593: Status 404 returned error can't find the container with id b0fc28209f10a3e996f2004b2935e7f7f0d3702abe3519ffcea0b77c33e5a593 Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.586984 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f","Type":"ContainerStarted","Data":"049599c73a96f0a67b4dc198dd8d7eea6d60e002de96297a4f60108bb30585bc"} Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.588975 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"013bed9b-6a31-4094-bb95-addbf3f4bd01","Type":"ContainerStarted","Data":"3cfed7c43284629a6351a4ee50333996e9abd11f4199ce2c961af5116ad1bff2"} Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.591524 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8f7gd" event={"ID":"9f4deadb-18ac-4d06-ba22-e391b19d38cd","Type":"ContainerStarted","Data":"0919266d903860a70c30148766f4b23532edcd71ec7a5ae724fd4a59f06dffc4"} Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.595175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerStarted","Data":"b0fc28209f10a3e996f2004b2935e7f7f0d3702abe3519ffcea0b77c33e5a593"} Feb 16 
13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.597198 4740 generic.go:334] "Generic (PLEG): container finished" podID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerID="427a4347253b1f218cb29a2e5fa718786b3338a5e82bf3ce06ee2ab5a6254207" exitCode=0 Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.597275 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" event={"ID":"56c1f9b3-2d24-4a15-a9ed-1e580d07368d","Type":"ContainerDied","Data":"427a4347253b1f218cb29a2e5fa718786b3338a5e82bf3ce06ee2ab5a6254207"} Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.597302 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" event={"ID":"56c1f9b3-2d24-4a15-a9ed-1e580d07368d","Type":"ContainerStarted","Data":"0f5c11f9e9c71c25a9a0ab7c58bca8206f5d89e3f123e7f468eccce30de6f1c4"} Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.604454 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerStarted","Data":"8f6654002ae747ce606f1b1f23d169b4f4ec2bec72a65e7471234bf4a2173103"} Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.609494 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerStarted","Data":"e04f019c32476631dfc276b161c2d8004e68193c635f2f2f3e656fded80b6609"} Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.665351 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8f7gd" podStartSLOduration=2.665329099 podStartE2EDuration="2.665329099s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:16.611206002 +0000 UTC m=+1163.987554733" watchObservedRunningTime="2026-02-16 
13:12:16.665329099 +0000 UTC m=+1164.041677820" Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.702233 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8crw8"] Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.624275 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerStarted","Data":"0e9acf49c8c57a0192695f150464693337d9eca058d4bead53eaefc925c92141"} Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.626426 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" event={"ID":"56c1f9b3-2d24-4a15-a9ed-1e580d07368d","Type":"ContainerStarted","Data":"8afebe6e22dd2d5cabfba1e3897a579c79457dc251f6bc22ca4178b93bc80a65"} Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.627923 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.637212 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8crw8" event={"ID":"975c922d-b91a-4cf6-9739-0d478d19765a","Type":"ContainerStarted","Data":"2b6d6cab54020a43fe901cf5d2e1f8b359bd5539376cbb46667ef99bd2c317e5"} Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.637280 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8crw8" event={"ID":"975c922d-b91a-4cf6-9739-0d478d19765a","Type":"ContainerStarted","Data":"53d2ffe580056abfb3ffff00eacb592e28263b522ff5fc6b6961c1abe2f0b38c"} Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.680398 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" podStartSLOduration=3.680357508 podStartE2EDuration="3.680357508s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:17.655148513 +0000 UTC m=+1165.031497244" watchObservedRunningTime="2026-02-16 13:12:17.680357508 +0000 UTC m=+1165.056706229" Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.697228 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8crw8" podStartSLOduration=2.697207759 podStartE2EDuration="2.697207759s" podCreationTimestamp="2026-02-16 13:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:17.677444136 +0000 UTC m=+1165.053792857" watchObservedRunningTime="2026-02-16 13:12:17.697207759 +0000 UTC m=+1165.073556480" Feb 16 13:12:18 crc kubenswrapper[4740]: I0216 13:12:18.812928 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:18 crc kubenswrapper[4740]: I0216 13:12:18.839593 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.658818 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f","Type":"ContainerStarted","Data":"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08"} Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.661121 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"013bed9b-6a31-4094-bb95-addbf3f4bd01","Type":"ContainerStarted","Data":"23eee7be68a23b02362f55885f674544ef0e19cd3bbf30ad4d01d2107403ffde"} Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.661223 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="013bed9b-6a31-4094-bb95-addbf3f4bd01" containerName="nova-cell1-novncproxy-novncproxy" 
containerID="cri-o://23eee7be68a23b02362f55885f674544ef0e19cd3bbf30ad4d01d2107403ffde" gracePeriod=30 Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.668210 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerStarted","Data":"df89fbb68337555ca32e22cc3c544316db4797904bb59c1bc4b6fb2d4bd470a9"} Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.675639 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerStarted","Data":"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7"} Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.675687 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerStarted","Data":"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80"} Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.677623 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerStarted","Data":"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44"} Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.677663 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerStarted","Data":"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921"} Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.677683 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-log" containerID="cri-o://5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" gracePeriod=30 Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 
13:12:19.677725 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-metadata" containerID="cri-o://5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44" gracePeriod=30 Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.691445 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.486322844 podStartE2EDuration="5.691427676s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="2026-02-16 13:12:15.568115538 +0000 UTC m=+1162.944464259" lastFinishedPulling="2026-02-16 13:12:18.77322037 +0000 UTC m=+1166.149569091" observedRunningTime="2026-02-16 13:12:19.683101034 +0000 UTC m=+1167.059449755" watchObservedRunningTime="2026-02-16 13:12:19.691427676 +0000 UTC m=+1167.067776397" Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.712013 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.780831578 podStartE2EDuration="5.711994981s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="2026-02-16 13:12:15.845944078 +0000 UTC m=+1163.222292809" lastFinishedPulling="2026-02-16 13:12:18.777107491 +0000 UTC m=+1166.153456212" observedRunningTime="2026-02-16 13:12:19.705002122 +0000 UTC m=+1167.081350863" watchObservedRunningTime="2026-02-16 13:12:19.711994981 +0000 UTC m=+1167.088343702" Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.755372 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.811418723 podStartE2EDuration="5.755355814s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="2026-02-16 13:12:15.870237715 +0000 UTC m=+1163.246586436" lastFinishedPulling="2026-02-16 13:12:18.814174806 +0000 UTC m=+1166.190523527" 
observedRunningTime="2026-02-16 13:12:19.732132754 +0000 UTC m=+1167.108481475" watchObservedRunningTime="2026-02-16 13:12:19.755355814 +0000 UTC m=+1167.131704545" Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.760027 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.934487964 podStartE2EDuration="5.76000892s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="2026-02-16 13:12:15.991412516 +0000 UTC m=+1163.367761247" lastFinishedPulling="2026-02-16 13:12:18.816933482 +0000 UTC m=+1166.193282203" observedRunningTime="2026-02-16 13:12:19.750255734 +0000 UTC m=+1167.126604455" watchObservedRunningTime="2026-02-16 13:12:19.76000892 +0000 UTC m=+1167.136357641" Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.980237 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.132391 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.132433 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.144775 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.507020 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.651456 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs\") pod \"7687ffed-8f01-4301-a20c-5feba63ac079\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.651567 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zctqv\" (UniqueName: \"kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv\") pod \"7687ffed-8f01-4301-a20c-5feba63ac079\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.651738 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle\") pod \"7687ffed-8f01-4301-a20c-5feba63ac079\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.651870 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data\") pod \"7687ffed-8f01-4301-a20c-5feba63ac079\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.652073 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs" (OuterVolumeSpecName: "logs") pod "7687ffed-8f01-4301-a20c-5feba63ac079" (UID: "7687ffed-8f01-4301-a20c-5feba63ac079"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.652685 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.658513 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv" (OuterVolumeSpecName: "kube-api-access-zctqv") pod "7687ffed-8f01-4301-a20c-5feba63ac079" (UID: "7687ffed-8f01-4301-a20c-5feba63ac079"). InnerVolumeSpecName "kube-api-access-zctqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.683541 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7687ffed-8f01-4301-a20c-5feba63ac079" (UID: "7687ffed-8f01-4301-a20c-5feba63ac079"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689562 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689570 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerDied","Data":"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44"} Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689647 4740 scope.go:117] "RemoveContainer" containerID="5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689539 4740 generic.go:334] "Generic (PLEG): container finished" podID="7687ffed-8f01-4301-a20c-5feba63ac079" containerID="5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44" exitCode=0 Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689936 4740 generic.go:334] "Generic (PLEG): container finished" podID="7687ffed-8f01-4301-a20c-5feba63ac079" containerID="5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" exitCode=143 Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689988 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerDied","Data":"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921"} Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.690003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerDied","Data":"e04f019c32476631dfc276b161c2d8004e68193c635f2f2f3e656fded80b6609"} Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.696228 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerStarted","Data":"705365a97839558eb2816feed6e761a8ab9d12cd4c2bcbf3179208f1f08c7614"} Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 
13:12:20.701956 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data" (OuterVolumeSpecName: "config-data") pod "7687ffed-8f01-4301-a20c-5feba63ac079" (UID: "7687ffed-8f01-4301-a20c-5feba63ac079"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.724006 4740 scope.go:117] "RemoveContainer" containerID="5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.747130 4740 scope.go:117] "RemoveContainer" containerID="5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44" Feb 16 13:12:20 crc kubenswrapper[4740]: E0216 13:12:20.747568 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44\": container with ID starting with 5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44 not found: ID does not exist" containerID="5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.747600 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44"} err="failed to get container status \"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44\": rpc error: code = NotFound desc = could not find container \"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44\": container with ID starting with 5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44 not found: ID does not exist" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.747629 4740 scope.go:117] "RemoveContainer" containerID="5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" Feb 16 13:12:20 crc 
kubenswrapper[4740]: E0216 13:12:20.748176 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921\": container with ID starting with 5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921 not found: ID does not exist" containerID="5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.748198 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921"} err="failed to get container status \"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921\": rpc error: code = NotFound desc = could not find container \"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921\": container with ID starting with 5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921 not found: ID does not exist" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.748211 4740 scope.go:117] "RemoveContainer" containerID="5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.748588 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44"} err="failed to get container status \"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44\": rpc error: code = NotFound desc = could not find container \"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44\": container with ID starting with 5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44 not found: ID does not exist" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.748641 4740 scope.go:117] "RemoveContainer" containerID="5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" Feb 16 
13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.749087 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921"} err="failed to get container status \"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921\": rpc error: code = NotFound desc = could not find container \"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921\": container with ID starting with 5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921 not found: ID does not exist" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.754734 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.754765 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zctqv\" (UniqueName: \"kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.754775 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.027440 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.039726 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.061579 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:21 crc kubenswrapper[4740]: E0216 13:12:21.062283 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-log" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.062415 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-log" Feb 16 13:12:21 crc kubenswrapper[4740]: E0216 13:12:21.062500 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-metadata" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.062571 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-metadata" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.062911 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-log" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.063007 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-metadata" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.067089 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.069376 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.069661 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.080936 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.163077 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.163363 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.163914 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.164135 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs\") pod \"nova-metadata-0\" (UID: 
\"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.164350 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ptm9\" (UniqueName: \"kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.266868 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ptm9\" (UniqueName: \"kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.267005 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.267051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.267083 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 
13:12:21.267150 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.267571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.272542 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.272562 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.288528 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ptm9\" (UniqueName: \"kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.288953 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.297303 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" path="/var/lib/kubelet/pods/7687ffed-8f01-4301-a20c-5feba63ac079/volumes" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.388007 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.856595 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.906671 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.717492 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerStarted","Data":"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6"} Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.717732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerStarted","Data":"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887"} Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.717743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerStarted","Data":"0641bfa13271f6c66fab9d171fca52a4133710b322c360f7da95c74871437730"} Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.723288 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerStarted","Data":"b4b0ed2b2eabee1e3146c4ac0b0d7bb3dfee892915eda3f99bf8c08572c0a096"} Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.723665 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.748447 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.748424143 podStartE2EDuration="1.748424143s" podCreationTimestamp="2026-02-16 13:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:22.738512131 +0000 UTC m=+1170.114860852" watchObservedRunningTime="2026-02-16 13:12:22.748424143 +0000 UTC m=+1170.124772864" Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.770593 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.571676365 podStartE2EDuration="7.770571638s" podCreationTimestamp="2026-02-16 13:12:15 +0000 UTC" firstStartedPulling="2026-02-16 13:12:16.2737363 +0000 UTC m=+1163.650085021" lastFinishedPulling="2026-02-16 13:12:21.472631583 +0000 UTC m=+1168.848980294" observedRunningTime="2026-02-16 13:12:22.76140425 +0000 UTC m=+1170.137752981" watchObservedRunningTime="2026-02-16 13:12:22.770571638 +0000 UTC m=+1170.146920359" Feb 16 13:12:23 crc kubenswrapper[4740]: I0216 13:12:23.734600 4740 generic.go:334] "Generic (PLEG): container finished" podID="9f4deadb-18ac-4d06-ba22-e391b19d38cd" containerID="0919266d903860a70c30148766f4b23532edcd71ec7a5ae724fd4a59f06dffc4" exitCode=0 Feb 16 13:12:23 crc kubenswrapper[4740]: I0216 13:12:23.734801 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8f7gd" 
event={"ID":"9f4deadb-18ac-4d06-ba22-e391b19d38cd","Type":"ContainerDied","Data":"0919266d903860a70c30148766f4b23532edcd71ec7a5ae724fd4a59f06dffc4"} Feb 16 13:12:24 crc kubenswrapper[4740]: I0216 13:12:24.747285 4740 generic.go:334] "Generic (PLEG): container finished" podID="975c922d-b91a-4cf6-9739-0d478d19765a" containerID="2b6d6cab54020a43fe901cf5d2e1f8b359bd5539376cbb46667ef99bd2c317e5" exitCode=0 Feb 16 13:12:24 crc kubenswrapper[4740]: I0216 13:12:24.747446 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8crw8" event={"ID":"975c922d-b91a-4cf6-9739-0d478d19765a","Type":"ContainerDied","Data":"2b6d6cab54020a43fe901cf5d2e1f8b359bd5539376cbb46667ef99bd2c317e5"} Feb 16 13:12:24 crc kubenswrapper[4740]: I0216 13:12:24.979919 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.005065 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.081702 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.097891 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.098142 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.160636 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.162745 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz5wj\" (UniqueName: \"kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj\") pod \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.162832 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data\") pod \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.162891 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle\") pod \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.162983 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts\") pod \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " Feb 
16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.171057 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj" (OuterVolumeSpecName: "kube-api-access-vz5wj") pod "9f4deadb-18ac-4d06-ba22-e391b19d38cd" (UID: "9f4deadb-18ac-4d06-ba22-e391b19d38cd"). InnerVolumeSpecName "kube-api-access-vz5wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.180518 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts" (OuterVolumeSpecName: "scripts") pod "9f4deadb-18ac-4d06-ba22-e391b19d38cd" (UID: "9f4deadb-18ac-4d06-ba22-e391b19d38cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.204976 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data" (OuterVolumeSpecName: "config-data") pod "9f4deadb-18ac-4d06-ba22-e391b19d38cd" (UID: "9f4deadb-18ac-4d06-ba22-e391b19d38cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.211004 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f4deadb-18ac-4d06-ba22-e391b19d38cd" (UID: "9f4deadb-18ac-4d06-ba22-e391b19d38cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.230244 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"] Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.230466 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="dnsmasq-dns" containerID="cri-o://3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3" gracePeriod=10 Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.265474 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.265721 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz5wj\" (UniqueName: \"kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.265732 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.265741 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.764700 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.783740 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8f7gd" event={"ID":"9f4deadb-18ac-4d06-ba22-e391b19d38cd","Type":"ContainerDied","Data":"b78b180e360c9e62f97483c5f9447dac4cda05293b92d3033ba99440494240fd"} Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.783832 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b78b180e360c9e62f97483c5f9447dac4cda05293b92d3033ba99440494240fd" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.783959 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.811124 4740 generic.go:334] "Generic (PLEG): container finished" podID="fecd834c-f149-401b-9c43-810e215a68ed" containerID="3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3" exitCode=0 Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.811459 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" event={"ID":"fecd834c-f149-401b-9c43-810e215a68ed","Type":"ContainerDied","Data":"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3"} Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.811503 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" event={"ID":"fecd834c-f149-401b-9c43-810e215a68ed","Type":"ContainerDied","Data":"07d715fc080ad12a61c95f40dabacc8440c0cb90c7a33ab67e7813105918c946"} Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.811522 4740 scope.go:117] "RemoveContainer" containerID="3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.811674 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892252 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cllbp\" (UniqueName: \"kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892334 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892402 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892443 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892469 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892486 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.896501 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp" (OuterVolumeSpecName: "kube-api-access-cllbp") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "kube-api-access-cllbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.916786 4740 scope.go:117] "RemoveContainer" containerID="7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.964536 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.965843 4740 scope.go:117] "RemoveContainer" containerID="3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3" Feb 16 13:12:25 crc kubenswrapper[4740]: E0216 13:12:25.966480 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3\": container with ID starting with 3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3 not found: ID does not exist" containerID="3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.966584 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3"} err="failed to get container status \"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3\": rpc error: code = NotFound desc = 
could not find container \"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3\": container with ID starting with 3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3 not found: ID does not exist" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.966673 4740 scope.go:117] "RemoveContainer" containerID="7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa" Feb 16 13:12:25 crc kubenswrapper[4740]: E0216 13:12:25.967085 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa\": container with ID starting with 7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa not found: ID does not exist" containerID="7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.967162 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa"} err="failed to get container status \"7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa\": rpc error: code = NotFound desc = could not find container \"7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa\": container with ID starting with 7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa not found: ID does not exist" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.972017 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.974748 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.976936 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.988732 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.994365 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cllbp\" (UniqueName: \"kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.994392 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.994404 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.000245 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config" (OuterVolumeSpecName: "config") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.000278 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.000714 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.012663 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.013048 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-log" containerID="cri-o://ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" gracePeriod=30 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.013511 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-metadata" containerID="cri-o://894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" gracePeriod=30 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.096526 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.096774 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.096783 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.195022 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.195304 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.228582 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.248300 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.340195 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.388879 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.389152 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.520244 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbsrj\" (UniqueName: \"kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj\") pod \"975c922d-b91a-4cf6-9739-0d478d19765a\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.520288 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle\") pod \"975c922d-b91a-4cf6-9739-0d478d19765a\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.520307 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data\") pod \"975c922d-b91a-4cf6-9739-0d478d19765a\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.520422 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts\") pod \"975c922d-b91a-4cf6-9739-0d478d19765a\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.525172 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts" (OuterVolumeSpecName: "scripts") pod "975c922d-b91a-4cf6-9739-0d478d19765a" (UID: "975c922d-b91a-4cf6-9739-0d478d19765a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.526688 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj" (OuterVolumeSpecName: "kube-api-access-fbsrj") pod "975c922d-b91a-4cf6-9739-0d478d19765a" (UID: "975c922d-b91a-4cf6-9739-0d478d19765a"). InnerVolumeSpecName "kube-api-access-fbsrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.549924 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data" (OuterVolumeSpecName: "config-data") pod "975c922d-b91a-4cf6-9739-0d478d19765a" (UID: "975c922d-b91a-4cf6-9739-0d478d19765a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.557197 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "975c922d-b91a-4cf6-9739-0d478d19765a" (UID: "975c922d-b91a-4cf6-9739-0d478d19765a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.578786 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.622276 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.622302 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbsrj\" (UniqueName: \"kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.622313 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.622325 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.723271 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data\") pod \"b0575bfd-41e7-4099-8f6e-ccece45c8478\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.723397 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs\") pod \"b0575bfd-41e7-4099-8f6e-ccece45c8478\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.723478 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle\") pod \"b0575bfd-41e7-4099-8f6e-ccece45c8478\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.723514 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ptm9\" (UniqueName: \"kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9\") pod \"b0575bfd-41e7-4099-8f6e-ccece45c8478\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.723531 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs\") pod \"b0575bfd-41e7-4099-8f6e-ccece45c8478\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.724678 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs" (OuterVolumeSpecName: "logs") pod "b0575bfd-41e7-4099-8f6e-ccece45c8478" (UID: "b0575bfd-41e7-4099-8f6e-ccece45c8478"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.740975 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9" (OuterVolumeSpecName: "kube-api-access-7ptm9") pod "b0575bfd-41e7-4099-8f6e-ccece45c8478" (UID: "b0575bfd-41e7-4099-8f6e-ccece45c8478"). InnerVolumeSpecName "kube-api-access-7ptm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.759148 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data" (OuterVolumeSpecName: "config-data") pod "b0575bfd-41e7-4099-8f6e-ccece45c8478" (UID: "b0575bfd-41e7-4099-8f6e-ccece45c8478"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.765941 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0575bfd-41e7-4099-8f6e-ccece45c8478" (UID: "b0575bfd-41e7-4099-8f6e-ccece45c8478"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.782903 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b0575bfd-41e7-4099-8f6e-ccece45c8478" (UID: "b0575bfd-41e7-4099-8f6e-ccece45c8478"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.820385 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8crw8" event={"ID":"975c922d-b91a-4cf6-9739-0d478d19765a","Type":"ContainerDied","Data":"53d2ffe580056abfb3ffff00eacb592e28263b522ff5fc6b6961c1abe2f0b38c"} Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.820435 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53d2ffe580056abfb3ffff00eacb592e28263b522ff5fc6b6961c1abe2f0b38c" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.820513 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.824604 4740 generic.go:334] "Generic (PLEG): container finished" podID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerID="894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" exitCode=0 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.824646 4740 generic.go:334] "Generic (PLEG): container finished" podID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerID="ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" exitCode=143 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.824900 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-log" containerID="cri-o://4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80" gracePeriod=30 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825255 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825372 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerDied","Data":"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6"} Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825413 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerDied","Data":"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887"} Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825431 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerDied","Data":"0641bfa13271f6c66fab9d171fca52a4133710b322c360f7da95c74871437730"} Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825451 4740 scope.go:117] "RemoveContainer" containerID="894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825625 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerName="nova-scheduler-scheduler" containerID="cri-o://1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" gracePeriod=30 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.826042 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-api" containerID="cri-o://0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7" gracePeriod=30 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.827060 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.827610 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ptm9\" (UniqueName: \"kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.827638 4740 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.827653 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.827667 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.860821 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861187 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-log" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861201 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-log" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861216 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975c922d-b91a-4cf6-9739-0d478d19765a" containerName="nova-cell1-conductor-db-sync" Feb 16 13:12:26 crc 
kubenswrapper[4740]: I0216 13:12:26.861222 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="975c922d-b91a-4cf6-9739-0d478d19765a" containerName="nova-cell1-conductor-db-sync" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861230 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-metadata" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861235 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-metadata" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861244 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="init" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861249 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="init" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861266 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="dnsmasq-dns" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861272 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="dnsmasq-dns" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861295 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4deadb-18ac-4d06-ba22-e391b19d38cd" containerName="nova-manage" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861300 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4deadb-18ac-4d06-ba22-e391b19d38cd" containerName="nova-manage" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861446 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="dnsmasq-dns" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 
13:12:26.861459 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-metadata" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861476 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-log" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861486 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4deadb-18ac-4d06-ba22-e391b19d38cd" containerName="nova-manage" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861494 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="975c922d-b91a-4cf6-9739-0d478d19765a" containerName="nova-cell1-conductor-db-sync" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.862111 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.866715 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.872681 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.879534 4740 scope.go:117] "RemoveContainer" containerID="ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.888640 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.914886 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.936754 4740 scope.go:117] "RemoveContainer" containerID="894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" Feb 16 13:12:26 crc kubenswrapper[4740]: 
E0216 13:12:26.937614 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6\": container with ID starting with 894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6 not found: ID does not exist" containerID="894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.937749 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6"} err="failed to get container status \"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6\": rpc error: code = NotFound desc = could not find container \"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6\": container with ID starting with 894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6 not found: ID does not exist" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.937904 4740 scope.go:117] "RemoveContainer" containerID="ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.938382 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887\": container with ID starting with ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887 not found: ID does not exist" containerID="ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.938413 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887"} err="failed to get container status \"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887\": 
rpc error: code = NotFound desc = could not find container \"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887\": container with ID starting with ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887 not found: ID does not exist" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.938437 4740 scope.go:117] "RemoveContainer" containerID="894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.938622 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6"} err="failed to get container status \"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6\": rpc error: code = NotFound desc = could not find container \"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6\": container with ID starting with 894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6 not found: ID does not exist" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.938652 4740 scope.go:117] "RemoveContainer" containerID="ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.938874 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887"} err="failed to get container status \"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887\": rpc error: code = NotFound desc = could not find container \"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887\": container with ID starting with ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887 not found: ID does not exist" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.960927 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 
13:12:26.962761 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.964651 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.964993 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.970020 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.031509 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.031619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.031773 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdmpv\" (UniqueName: \"kubernetes.io/projected/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-kube-api-access-gdmpv\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134450 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134492 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdmpv\" (UniqueName: \"kubernetes.io/projected/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-kube-api-access-gdmpv\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134579 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfmmj\" (UniqueName: \"kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134782 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134869 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134921 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.135038 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.141095 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.141713 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.170042 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdmpv\" (UniqueName: \"kubernetes.io/projected/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-kube-api-access-gdmpv\") pod \"nova-cell1-conductor-0\" (UID: 
\"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.183230 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.236576 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.236644 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfmmj\" (UniqueName: \"kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.236759 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.236783 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.236858 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle\") pod \"nova-metadata-0\" 
(UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.237446 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.242217 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.243698 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.277228 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.279678 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfmmj\" (UniqueName: \"kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.344115 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.364556 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" path="/var/lib/kubelet/pods/b0575bfd-41e7-4099-8f6e-ccece45c8478/volumes" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.365552 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fecd834c-f149-401b-9c43-810e215a68ed" path="/var/lib/kubelet/pods/fecd834c-f149-401b-9c43-810e215a68ed/volumes" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.727968 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 13:12:27 crc kubenswrapper[4740]: W0216 13:12:27.730446 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4465f42a_9c2a_4aa7_9e45_fa28f78cddd7.slice/crio-5c026ddf515f5f93d40dd152e4c9c92d7f34e270b93d560516787917956503c5 WatchSource:0}: Error finding container 5c026ddf515f5f93d40dd152e4c9c92d7f34e270b93d560516787917956503c5: Status 404 returned error can't find the container with id 5c026ddf515f5f93d40dd152e4c9c92d7f34e270b93d560516787917956503c5 Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.836212 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7","Type":"ContainerStarted","Data":"5c026ddf515f5f93d40dd152e4c9c92d7f34e270b93d560516787917956503c5"} Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.840223 4740 generic.go:334] "Generic (PLEG): container finished" podID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerID="4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80" exitCode=143 Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.840377 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerDied","Data":"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80"} Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.847616 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.859367 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7","Type":"ContainerStarted","Data":"bcd2b379d359f09a2dbb968f30fb1ebd6a91973a94b139c6fd50973f12c6d387"} Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.860495 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.863097 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerStarted","Data":"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece"} Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.863252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerStarted","Data":"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04"} Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.863344 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerStarted","Data":"ec2fb41ee7bc0398ae3bc21b2bd16713c705f380c7a143d9071f0092702463d2"} Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.882075 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.882057374 podStartE2EDuration="2.882057374s" podCreationTimestamp="2026-02-16 13:12:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:28.8765205 +0000 UTC m=+1176.252869231" watchObservedRunningTime="2026-02-16 13:12:28.882057374 +0000 UTC m=+1176.258406095" Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.899355 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.899336227 podStartE2EDuration="2.899336227s" podCreationTimestamp="2026-02-16 13:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:28.893744731 +0000 UTC m=+1176.270093452" watchObservedRunningTime="2026-02-16 13:12:28.899336227 +0000 UTC m=+1176.275684938" Feb 16 13:12:29 crc kubenswrapper[4740]: E0216 13:12:29.982234 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:12:29 crc kubenswrapper[4740]: E0216 13:12:29.985022 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:12:29 crc kubenswrapper[4740]: E0216 13:12:29.986566 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] 
Feb 16 13:12:29 crc kubenswrapper[4740]: E0216 13:12:29.986616 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerName="nova-scheduler-scheduler" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.577096 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.744440 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") pod \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.744543 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data\") pod \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.744735 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4qt\" (UniqueName: \"kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt\") pod \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.750551 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt" (OuterVolumeSpecName: "kube-api-access-gg4qt") pod "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" (UID: "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f"). 
InnerVolumeSpecName "kube-api-access-gg4qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:31 crc kubenswrapper[4740]: E0216 13:12:31.770979 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle podName:bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f nodeName:}" failed. No retries permitted until 2026-02-16 13:12:32.27094694 +0000 UTC m=+1179.647295661 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle") pod "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" (UID: "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f") : error deleting /var/lib/kubelet/pods/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f/volume-subpaths: remove /var/lib/kubelet/pods/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f/volume-subpaths: no such file or directory Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.774308 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data" (OuterVolumeSpecName: "config-data") pod "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" (UID: "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.846765 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.846795 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg4qt\" (UniqueName: \"kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.891215 4740 generic.go:334] "Generic (PLEG): container finished" podID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" exitCode=0 Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.891261 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f","Type":"ContainerDied","Data":"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08"} Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.891271 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.891295 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f","Type":"ContainerDied","Data":"049599c73a96f0a67b4dc198dd8d7eea6d60e002de96297a4f60108bb30585bc"} Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.891315 4740 scope.go:117] "RemoveContainer" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.914043 4740 scope.go:117] "RemoveContainer" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" Feb 16 13:12:31 crc kubenswrapper[4740]: E0216 13:12:31.914410 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08\": container with ID starting with 1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08 not found: ID does not exist" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.914464 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08"} err="failed to get container status \"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08\": rpc error: code = NotFound desc = could not find container \"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08\": container with ID starting with 1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08 not found: ID does not exist" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.218194 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 
13:12:32.346745 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.346802 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.357026 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") pod \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.391133 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" (UID: "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.459023 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.529221 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.546513 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.557928 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: E0216 13:12:32.558353 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerName="nova-scheduler-scheduler" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.558370 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerName="nova-scheduler-scheduler" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.558546 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerName="nova-scheduler-scheduler" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.559197 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.571434 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.573273 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.662470 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbbdl\" (UniqueName: \"kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.662742 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.662918 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.666184 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764056 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle\") pod \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764232 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data\") pod \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764308 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs\") pod \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764385 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8nmx\" (UniqueName: \"kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx\") pod \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764651 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764733 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbbdl\" 
(UniqueName: \"kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764868 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.765301 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs" (OuterVolumeSpecName: "logs") pod "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" (UID: "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.767702 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx" (OuterVolumeSpecName: "kube-api-access-f8nmx") pod "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" (UID: "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac"). InnerVolumeSpecName "kube-api-access-f8nmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.768048 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.768448 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.785631 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbbdl\" (UniqueName: \"kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.812199 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" (UID: "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.817172 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data" (OuterVolumeSpecName: "config-data") pod "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" (UID: "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.866414 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8nmx\" (UniqueName: \"kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.866453 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.866470 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.866483 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.904509 4740 generic.go:334] "Generic (PLEG): container finished" podID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerID="0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7" exitCode=0 Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.904570 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.904606 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerDied","Data":"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7"} Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.904645 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerDied","Data":"8f6654002ae747ce606f1b1f23d169b4f4ec2bec72a65e7471234bf4a2173103"} Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.904665 4740 scope.go:117] "RemoveContainer" containerID="0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.938963 4740 scope.go:117] "RemoveContainer" containerID="4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.949657 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.959180 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.962531 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.970976 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: E0216 13:12:32.971379 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-api" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.971398 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-api" Feb 16 13:12:32 crc kubenswrapper[4740]: E0216 13:12:32.971413 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-log" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.971419 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-log" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.971604 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-log" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.971633 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-api" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.972597 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.977758 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.986714 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.992089 4740 scope.go:117] "RemoveContainer" containerID="0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7" Feb 16 13:12:32 crc kubenswrapper[4740]: E0216 13:12:32.993635 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7\": container with ID starting with 0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7 not found: ID does not exist" containerID="0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.993684 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7"} err="failed to get container status \"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7\": rpc error: code = NotFound desc = could not find container \"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7\": container with ID starting with 0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7 not found: ID does not exist" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.993715 4740 scope.go:117] "RemoveContainer" containerID="4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80" Feb 16 13:12:32 crc kubenswrapper[4740]: E0216 13:12:32.996136 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80\": container with ID starting with 4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80 not found: ID does not exist" containerID="4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.996176 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80"} err="failed to get container status \"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80\": rpc error: code = NotFound desc = could not find container \"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80\": container with ID starting with 4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80 not found: ID does not exist" Feb 16 13:12:33 crc kubenswrapper[4740]: E0216 13:12:33.034533 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod620a5f5a_99ec_4801_ba17_d9a5fa7ea7ac.slice\": RecentStats: unable to find data in memory cache]" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.069088 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvp2g\" (UniqueName: \"kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.069161 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 
13:12:33.069213 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.069278 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.170638 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.171191 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvp2g\" (UniqueName: \"kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.171352 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.171506 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data\") 
pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.171736 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.176514 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.176574 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.187316 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvp2g\" (UniqueName: \"kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.294288 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" path="/var/lib/kubelet/pods/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac/volumes" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.294897 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" path="/var/lib/kubelet/pods/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f/volumes" Feb 16 13:12:33 crc 
kubenswrapper[4740]: I0216 13:12:33.300773 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.535044 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.737960 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:33 crc kubenswrapper[4740]: W0216 13:12:33.742103 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63770e5c_58d7_48e9_b7dc_b0ed093c5a01.slice/crio-c4d683b87b8053b36a187d093b9f0ec534b07751c618fc2b202e8bf53d44bb7c WatchSource:0}: Error finding container c4d683b87b8053b36a187d093b9f0ec534b07751c618fc2b202e8bf53d44bb7c: Status 404 returned error can't find the container with id c4d683b87b8053b36a187d093b9f0ec534b07751c618fc2b202e8bf53d44bb7c Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.916430 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerStarted","Data":"c4d683b87b8053b36a187d093b9f0ec534b07751c618fc2b202e8bf53d44bb7c"} Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.918803 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8eb17c9-d042-4220-bc24-e56054e5be4d","Type":"ContainerStarted","Data":"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff"} Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.918857 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8eb17c9-d042-4220-bc24-e56054e5be4d","Type":"ContainerStarted","Data":"77398076beb6a4d90d4fcff474e340ac86b26fe45053c777aa70824a5ad08261"} Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.946543 4740 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.946519187 podStartE2EDuration="1.946519187s" podCreationTimestamp="2026-02-16 13:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:33.937996379 +0000 UTC m=+1181.314345100" watchObservedRunningTime="2026-02-16 13:12:33.946519187 +0000 UTC m=+1181.322867908" Feb 16 13:12:34 crc kubenswrapper[4740]: I0216 13:12:34.929953 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerStarted","Data":"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b"} Feb 16 13:12:34 crc kubenswrapper[4740]: I0216 13:12:34.930221 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerStarted","Data":"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686"} Feb 16 13:12:34 crc kubenswrapper[4740]: I0216 13:12:34.951414 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.951359094 podStartE2EDuration="2.951359094s" podCreationTimestamp="2026-02-16 13:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:34.946492211 +0000 UTC m=+1182.322840962" watchObservedRunningTime="2026-02-16 13:12:34.951359094 +0000 UTC m=+1182.327707835" Feb 16 13:12:37 crc kubenswrapper[4740]: I0216 13:12:37.345548 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 13:12:37 crc kubenswrapper[4740]: I0216 13:12:37.347627 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 13:12:37 crc kubenswrapper[4740]: I0216 
13:12:37.963582 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 13:12:38 crc kubenswrapper[4740]: I0216 13:12:38.368145 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:38 crc kubenswrapper[4740]: I0216 13:12:38.368561 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:42 crc kubenswrapper[4740]: I0216 13:12:42.963897 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 13:12:42 crc kubenswrapper[4740]: I0216 13:12:42.991219 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 13:12:43 crc kubenswrapper[4740]: I0216 13:12:43.051391 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 13:12:43 crc kubenswrapper[4740]: I0216 13:12:43.301235 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:12:43 crc kubenswrapper[4740]: I0216 13:12:43.301571 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:12:44 crc kubenswrapper[4740]: I0216 13:12:44.384111 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:44 crc kubenswrapper[4740]: I0216 13:12:44.384174 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:45 crc kubenswrapper[4740]: I0216 13:12:45.601901 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 13:12:47 crc kubenswrapper[4740]: I0216 13:12:47.355431 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 13:12:47 crc kubenswrapper[4740]: I0216 13:12:47.361264 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 13:12:47 crc kubenswrapper[4740]: I0216 13:12:47.379607 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 13:12:48 crc kubenswrapper[4740]: I0216 13:12:48.072110 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.086405 4740 generic.go:334] "Generic (PLEG): container finished" podID="013bed9b-6a31-4094-bb95-addbf3f4bd01" containerID="23eee7be68a23b02362f55885f674544ef0e19cd3bbf30ad4d01d2107403ffde" exitCode=137 Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.087679 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"013bed9b-6a31-4094-bb95-addbf3f4bd01","Type":"ContainerDied","Data":"23eee7be68a23b02362f55885f674544ef0e19cd3bbf30ad4d01d2107403ffde"} Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.249787 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.400125 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle\") pod \"013bed9b-6a31-4094-bb95-addbf3f4bd01\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.400232 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtlkn\" (UniqueName: \"kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn\") pod \"013bed9b-6a31-4094-bb95-addbf3f4bd01\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.400389 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data\") pod \"013bed9b-6a31-4094-bb95-addbf3f4bd01\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.406541 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn" (OuterVolumeSpecName: "kube-api-access-mtlkn") pod "013bed9b-6a31-4094-bb95-addbf3f4bd01" (UID: "013bed9b-6a31-4094-bb95-addbf3f4bd01"). InnerVolumeSpecName "kube-api-access-mtlkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.429634 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data" (OuterVolumeSpecName: "config-data") pod "013bed9b-6a31-4094-bb95-addbf3f4bd01" (UID: "013bed9b-6a31-4094-bb95-addbf3f4bd01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.431895 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "013bed9b-6a31-4094-bb95-addbf3f4bd01" (UID: "013bed9b-6a31-4094-bb95-addbf3f4bd01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.502212 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.502246 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtlkn\" (UniqueName: \"kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.502262 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.096905 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"013bed9b-6a31-4094-bb95-addbf3f4bd01","Type":"ContainerDied","Data":"3cfed7c43284629a6351a4ee50333996e9abd11f4199ce2c961af5116ad1bff2"} Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.096960 4740 scope.go:117] "RemoveContainer" containerID="23eee7be68a23b02362f55885f674544ef0e19cd3bbf30ad4d01d2107403ffde" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.096975 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.126358 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.133589 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.155561 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:51 crc kubenswrapper[4740]: E0216 13:12:51.156044 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013bed9b-6a31-4094-bb95-addbf3f4bd01" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.156067 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="013bed9b-6a31-4094-bb95-addbf3f4bd01" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.156328 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="013bed9b-6a31-4094-bb95-addbf3f4bd01" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.157015 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.160077 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.160319 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.164443 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.167165 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.291722 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="013bed9b-6a31-4094-bb95-addbf3f4bd01" path="/var/lib/kubelet/pods/013bed9b-6a31-4094-bb95-addbf3f4bd01/volumes" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.313942 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.313996 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.314285 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.314486 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg6d6\" (UniqueName: \"kubernetes.io/projected/94da2ded-002e-4aa6-9828-404bee84c146-kube-api-access-rg6d6\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.314547 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.415623 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg6d6\" (UniqueName: \"kubernetes.io/projected/94da2ded-002e-4aa6-9828-404bee84c146-kube-api-access-rg6d6\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.415677 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.415708 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.415730 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.415858 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.422270 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.422329 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.423703 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.424493 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.434554 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg6d6\" (UniqueName: \"kubernetes.io/projected/94da2ded-002e-4aa6-9828-404bee84c146-kube-api-access-rg6d6\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.484732 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.950902 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:52 crc kubenswrapper[4740]: I0216 13:12:52.108464 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"94da2ded-002e-4aa6-9828-404bee84c146","Type":"ContainerStarted","Data":"8df082995828aed6a7175bebbb77e69bc7047b9b6e0fd4bdf78dce6766443465"} Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.118243 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"94da2ded-002e-4aa6-9828-404bee84c146","Type":"ContainerStarted","Data":"2c509854892016222cad13899eb43eee1142843f607470d93ce9615bca7fc20a"} Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.140679 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.14065428 podStartE2EDuration="2.14065428s" podCreationTimestamp="2026-02-16 13:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:53.131358108 +0000 UTC m=+1200.507706849" watchObservedRunningTime="2026-02-16 13:12:53.14065428 +0000 UTC m=+1200.517002991" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.309224 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.309580 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.309829 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.309848 4740 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.312698 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.312769 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.508895 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.510821 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.536394 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607461 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x6mf\" (UniqueName: \"kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607525 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607573 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607682 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607771 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.709595 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.709729 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.709794 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x6mf\" (UniqueName: \"kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.709905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.709951 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.710018 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.711276 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.711298 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.711429 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.711555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.711930 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.730500 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x6mf\" (UniqueName: \"kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf\") pod 
\"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.867865 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:54 crc kubenswrapper[4740]: I0216 13:12:54.347771 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.152095 4740 generic.go:334] "Generic (PLEG): container finished" podID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerID="93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831" exitCode=0 Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.152204 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" event={"ID":"2d75f780-0301-46d4-aa0b-ecdf66b8bc21","Type":"ContainerDied","Data":"93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831"} Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.152533 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" event={"ID":"2d75f780-0301-46d4-aa0b-ecdf66b8bc21","Type":"ContainerStarted","Data":"36059a67ae20b43daa15e0427481604179329ee78b6747e45aa5695fe8ffaa0e"} Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.476624 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.478183 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="proxy-httpd" containerID="cri-o://b4b0ed2b2eabee1e3146c4ac0b0d7bb3dfee892915eda3f99bf8c08572c0a096" gracePeriod=30 Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.478175 4740 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="sg-core" containerID="cri-o://705365a97839558eb2816feed6e761a8ab9d12cd4c2bcbf3179208f1f08c7614" gracePeriod=30 Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.478361 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-notification-agent" containerID="cri-o://df89fbb68337555ca32e22cc3c544316db4797904bb59c1bc4b6fb2d4bd470a9" gracePeriod=30 Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.477290 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-central-agent" containerID="cri-o://0e9acf49c8c57a0192695f150464693337d9eca058d4bead53eaefc925c92141" gracePeriod=30 Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.677711 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.162497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" event={"ID":"2d75f780-0301-46d4-aa0b-ecdf66b8bc21","Type":"ContainerStarted","Data":"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32"} Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.162852 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165459 4740 generic.go:334] "Generic (PLEG): container finished" podID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerID="b4b0ed2b2eabee1e3146c4ac0b0d7bb3dfee892915eda3f99bf8c08572c0a096" exitCode=0 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165493 4740 generic.go:334] "Generic (PLEG): container finished" podID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" 
containerID="705365a97839558eb2816feed6e761a8ab9d12cd4c2bcbf3179208f1f08c7614" exitCode=2 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165501 4740 generic.go:334] "Generic (PLEG): container finished" podID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerID="df89fbb68337555ca32e22cc3c544316db4797904bb59c1bc4b6fb2d4bd470a9" exitCode=0 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165509 4740 generic.go:334] "Generic (PLEG): container finished" podID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerID="0e9acf49c8c57a0192695f150464693337d9eca058d4bead53eaefc925c92141" exitCode=0 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165677 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-log" containerID="cri-o://e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b" gracePeriod=30 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165979 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerDied","Data":"b4b0ed2b2eabee1e3146c4ac0b0d7bb3dfee892915eda3f99bf8c08572c0a096"} Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.166016 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerDied","Data":"705365a97839558eb2816feed6e761a8ab9d12cd4c2bcbf3179208f1f08c7614"} Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.166027 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerDied","Data":"df89fbb68337555ca32e22cc3c544316db4797904bb59c1bc4b6fb2d4bd470a9"} Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.166035 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerDied","Data":"0e9acf49c8c57a0192695f150464693337d9eca058d4bead53eaefc925c92141"} Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.166083 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-api" containerID="cri-o://b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686" gracePeriod=30 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.186777 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" podStartSLOduration=3.186752685 podStartE2EDuration="3.186752685s" podCreationTimestamp="2026-02-16 13:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:56.178789405 +0000 UTC m=+1203.555138146" watchObservedRunningTime="2026-02-16 13:12:56.186752685 +0000 UTC m=+1203.563101406" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.290132 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360408 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck59f\" (UniqueName: \"kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360512 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360567 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360626 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360656 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360744 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360908 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360953 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.363429 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.365495 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.366790 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts" (OuterVolumeSpecName: "scripts") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.368966 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f" (OuterVolumeSpecName: "kube-api-access-ck59f") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "kube-api-access-ck59f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.399402 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.440865 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463064 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463097 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck59f\" (UniqueName: \"kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463108 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463118 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463130 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463138 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.473787 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: 
"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.485023 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.488248 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data" (OuterVolumeSpecName: "config-data") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.564700 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.564749 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.177199 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.177473 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerDied","Data":"b0fc28209f10a3e996f2004b2935e7f7f0d3702abe3519ffcea0b77c33e5a593"} Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.177639 4740 scope.go:117] "RemoveContainer" containerID="b4b0ed2b2eabee1e3146c4ac0b0d7bb3dfee892915eda3f99bf8c08572c0a096" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.179700 4740 generic.go:334] "Generic (PLEG): container finished" podID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerID="e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b" exitCode=143 Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.179781 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerDied","Data":"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b"} Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.212949 4740 scope.go:117] "RemoveContainer" containerID="705365a97839558eb2816feed6e761a8ab9d12cd4c2bcbf3179208f1f08c7614" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.217107 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.229607 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.246204 4740 scope.go:117] "RemoveContainer" containerID="df89fbb68337555ca32e22cc3c544316db4797904bb59c1bc4b6fb2d4bd470a9" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256125 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:57 crc kubenswrapper[4740]: E0216 13:12:57.256615 4740 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-notification-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256636 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-notification-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: E0216 13:12:57.256665 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="sg-core" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256676 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="sg-core" Feb 16 13:12:57 crc kubenswrapper[4740]: E0216 13:12:57.256689 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="proxy-httpd" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256696 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="proxy-httpd" Feb 16 13:12:57 crc kubenswrapper[4740]: E0216 13:12:57.256711 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-central-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256720 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-central-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256942 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-central-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256971 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="proxy-httpd" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256987 4740 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-notification-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.257004 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="sg-core" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.259337 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.261903 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.261970 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.261970 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.274448 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.290824 4740 scope.go:117] "RemoveContainer" containerID="0e9acf49c8c57a0192695f150464693337d9eca058d4bead53eaefc925c92141" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.304511 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" path="/var/lib/kubelet/pods/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c/volumes" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.323176 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:57 crc kubenswrapper[4740]: E0216 13:12:57.323863 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-qqsmc log-httpd run-httpd scripts sg-core-conf-yaml], unattached 
volumes=[], failed to process volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-qqsmc log-httpd run-httpd scripts sg-core-conf-yaml]: context canceled" pod="openstack/ceilometer-0" podUID="74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380491 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380557 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380677 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380716 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqsmc\" (UniqueName: \"kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380769 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380789 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380902 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380936 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.482848 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.483668 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" 
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.483721 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.483781 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqsmc\" (UniqueName: \"kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.483840 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.483869 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.484033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.484107 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.484358 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.485079 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.487555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.487729 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.488345 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.496804 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.497624 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.498213 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqsmc\" (UniqueName: \"kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.191072 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.205541 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299577 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299652 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299684 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299768 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqsmc\" (UniqueName: \"kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299842 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299924 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299953 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299971 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.301218 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.301691 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.305091 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.305124 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.306030 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data" (OuterVolumeSpecName: "config-data") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.306063 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts" (OuterVolumeSpecName: "scripts") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.306465 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc" (OuterVolumeSpecName: "kube-api-access-qqsmc") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "kube-api-access-qqsmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.327567 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402868 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402910 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402924 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqsmc\" (UniqueName: \"kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402943 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data\") on node \"crc\" 
DevicePath \"\"" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402956 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402970 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402984 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402996 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.198705 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.252241 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.263706 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.319278 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" path="/var/lib/kubelet/pods/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9/volumes" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.335798 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.338927 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.342940 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.343519 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.344399 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.350544 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.420952 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 
13:12:59.421059 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pmf8\" (UniqueName: \"kubernetes.io/projected/dcfe5822-8cae-409c-8224-b1ce2c452e02-kube-api-access-6pmf8\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421086 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-config-data\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421118 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-log-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421153 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-scripts\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421233 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421273 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-run-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421324 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-run-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522633 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522666 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522716 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pmf8\" (UniqueName: \"kubernetes.io/projected/dcfe5822-8cae-409c-8224-b1ce2c452e02-kube-api-access-6pmf8\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " 
pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522737 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-config-data\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522761 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-log-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522783 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-scripts\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522847 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.523083 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-run-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.523929 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-log-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.527635 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.528221 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-scripts\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.528459 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.529084 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-config-data\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.529619 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.560733 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pmf8\" (UniqueName: \"kubernetes.io/projected/dcfe5822-8cae-409c-8224-b1ce2c452e02-kube-api-access-6pmf8\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.763266 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.767244 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.827857 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle\") pod \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.827928 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvp2g\" (UniqueName: \"kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g\") pod \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.827971 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data\") pod \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.828070 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs\") pod \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\" (UID: 
\"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.829093 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs" (OuterVolumeSpecName: "logs") pod "63770e5c-58d7-48e9-b7dc-b0ed093c5a01" (UID: "63770e5c-58d7-48e9-b7dc-b0ed093c5a01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.833730 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g" (OuterVolumeSpecName: "kube-api-access-nvp2g") pod "63770e5c-58d7-48e9-b7dc-b0ed093c5a01" (UID: "63770e5c-58d7-48e9-b7dc-b0ed093c5a01"). InnerVolumeSpecName "kube-api-access-nvp2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.860032 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63770e5c-58d7-48e9-b7dc-b0ed093c5a01" (UID: "63770e5c-58d7-48e9-b7dc-b0ed093c5a01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.870052 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data" (OuterVolumeSpecName: "config-data") pod "63770e5c-58d7-48e9-b7dc-b0ed093c5a01" (UID: "63770e5c-58d7-48e9-b7dc-b0ed093c5a01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.930649 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.930691 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvp2g\" (UniqueName: \"kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.930706 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.930718 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.210224 4740 generic.go:334] "Generic (PLEG): container finished" podID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerID="b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686" exitCode=0 Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.210288 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerDied","Data":"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686"} Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.210566 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerDied","Data":"c4d683b87b8053b36a187d093b9f0ec534b07751c618fc2b202e8bf53d44bb7c"} Feb 16 13:13:00 crc kubenswrapper[4740]: 
I0216 13:13:00.210585 4740 scope.go:117] "RemoveContainer" containerID="b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.210304 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.241061 4740 scope.go:117] "RemoveContainer" containerID="e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.251960 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.270131 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.270736 4740 scope.go:117] "RemoveContainer" containerID="b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686" Feb 16 13:13:00 crc kubenswrapper[4740]: E0216 13:13:00.271230 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686\": container with ID starting with b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686 not found: ID does not exist" containerID="b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.271267 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686"} err="failed to get container status \"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686\": rpc error: code = NotFound desc = could not find container \"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686\": container with ID starting with b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686 not found: 
ID does not exist" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.271292 4740 scope.go:117] "RemoveContainer" containerID="e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b" Feb 16 13:13:00 crc kubenswrapper[4740]: E0216 13:13:00.272513 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b\": container with ID starting with e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b not found: ID does not exist" containerID="e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.272552 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b"} err="failed to get container status \"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b\": rpc error: code = NotFound desc = could not find container \"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b\": container with ID starting with e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b not found: ID does not exist" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.284368 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:00 crc kubenswrapper[4740]: E0216 13:13:00.285082 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-log" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.285178 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-log" Feb 16 13:13:00 crc kubenswrapper[4740]: E0216 13:13:00.285282 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-api" Feb 16 
13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.285488 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-api" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.285791 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-api" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.285929 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-log" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.287242 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.292372 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.292645 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.292920 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.296907 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.320350 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:13:00 crc kubenswrapper[4740]: W0216 13:13:00.325953 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcfe5822_8cae_409c_8224_b1ce2c452e02.slice/crio-4a9b84755aba030c85041822dd15d25015d1fb759bd227189a5abf203affe94d WatchSource:0}: Error finding container 4a9b84755aba030c85041822dd15d25015d1fb759bd227189a5abf203affe94d: Status 404 returned error can't find the 
container with id 4a9b84755aba030c85041822dd15d25015d1fb759bd227189a5abf203affe94d Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.338730 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.338909 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.339070 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.339124 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.339196 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.339664 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmwj2\" (UniqueName: \"kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441607 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmwj2\" (UniqueName: \"kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441767 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441866 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441899 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441937 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.442396 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.446847 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.448558 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.449187 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.449783 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.466560 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmwj2\" (UniqueName: \"kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.606917 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.059107 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:01 crc kubenswrapper[4740]: W0216 13:13:01.061004 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74e5f1b3_dadf_447d_b4c7_6c7274acb380.slice/crio-79ef3154c2df0c308700bb21d2bcd8c8251a70b34c1f3505854cc7ce16a8e1aa WatchSource:0}: Error finding container 79ef3154c2df0c308700bb21d2bcd8c8251a70b34c1f3505854cc7ce16a8e1aa: Status 404 returned error can't find the container with id 79ef3154c2df0c308700bb21d2bcd8c8251a70b34c1f3505854cc7ce16a8e1aa Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.222747 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcfe5822-8cae-409c-8224-b1ce2c452e02","Type":"ContainerStarted","Data":"5a681abd671eee6caab206fba914b067565558873fcb6b5018e014d280fd8723"} Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.222803 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcfe5822-8cae-409c-8224-b1ce2c452e02","Type":"ContainerStarted","Data":"4a9b84755aba030c85041822dd15d25015d1fb759bd227189a5abf203affe94d"} Feb 16 
13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.225442 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerStarted","Data":"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b"} Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.225497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerStarted","Data":"79ef3154c2df0c308700bb21d2bcd8c8251a70b34c1f3505854cc7ce16a8e1aa"} Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.295933 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" path="/var/lib/kubelet/pods/63770e5c-58d7-48e9-b7dc-b0ed093c5a01/volumes" Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.485384 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.509148 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.239099 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerStarted","Data":"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7"} Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.241731 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcfe5822-8cae-409c-8224-b1ce2c452e02","Type":"ContainerStarted","Data":"1305e976ed730967af7a5cc045b686f425f616438f8268e06c449fac060878c1"} Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.272287 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.272268754 
podStartE2EDuration="2.272268754s" podCreationTimestamp="2026-02-16 13:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:13:02.26670794 +0000 UTC m=+1209.643056681" watchObservedRunningTime="2026-02-16 13:13:02.272268754 +0000 UTC m=+1209.648617475" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.277350 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.616596 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-l9964"] Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.618896 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.621502 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.622578 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.629364 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9964"] Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.709753 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.709846 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbh27\" 
(UniqueName: \"kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.709898 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.709917 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.812039 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.812323 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbh27\" (UniqueName: \"kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.812435 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.812511 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.816788 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.816790 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.831440 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.833349 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbh27\" (UniqueName: \"kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27\") pod 
\"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.942506 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:03 crc kubenswrapper[4740]: I0216 13:13:03.254911 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcfe5822-8cae-409c-8224-b1ce2c452e02","Type":"ContainerStarted","Data":"033eacf45cba8509b3bba630b0a035fdf750f10e02b2a3370f3aceb052ea3d1b"} Feb 16 13:13:03 crc kubenswrapper[4740]: I0216 13:13:03.408227 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9964"] Feb 16 13:13:03 crc kubenswrapper[4740]: I0216 13:13:03.872981 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:13:03 crc kubenswrapper[4740]: I0216 13:13:03.950312 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"] Feb 16 13:13:03 crc kubenswrapper[4740]: I0216 13:13:03.950584 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="dnsmasq-dns" containerID="cri-o://8afebe6e22dd2d5cabfba1e3897a579c79457dc251f6bc22ca4178b93bc80a65" gracePeriod=10 Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.300508 4740 generic.go:334] "Generic (PLEG): container finished" podID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerID="8afebe6e22dd2d5cabfba1e3897a579c79457dc251f6bc22ca4178b93bc80a65" exitCode=0 Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.300782 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" 
event={"ID":"56c1f9b3-2d24-4a15-a9ed-1e580d07368d","Type":"ContainerDied","Data":"8afebe6e22dd2d5cabfba1e3897a579c79457dc251f6bc22ca4178b93bc80a65"} Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.309158 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9964" event={"ID":"798bf8e1-4a33-48eb-bbb3-9be8d38027de","Type":"ContainerStarted","Data":"88c2737be162f9ce8e9f65cc3abfbf6976ffda5494fd0d0154f6ce2147c27b29"} Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.309206 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9964" event={"ID":"798bf8e1-4a33-48eb-bbb3-9be8d38027de","Type":"ContainerStarted","Data":"7890daaec85b079293eb53769939fb89e5c479ff49cd46a21cd3a2bb42a4d2ce"} Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.324343 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcfe5822-8cae-409c-8224-b1ce2c452e02","Type":"ContainerStarted","Data":"804de6233210445ee2ac23f44d8c776698377d2f607e29440a0c4e15a1e65b5f"} Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.324861 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.333408 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-l9964" podStartSLOduration=2.333387395 podStartE2EDuration="2.333387395s" podCreationTimestamp="2026-02-16 13:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:13:04.330085392 +0000 UTC m=+1211.706434113" watchObservedRunningTime="2026-02-16 13:13:04.333387395 +0000 UTC m=+1211.709736106" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.373650 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=1.867817362 podStartE2EDuration="5.373634189s" podCreationTimestamp="2026-02-16 13:12:59 +0000 UTC" firstStartedPulling="2026-02-16 13:13:00.328362885 +0000 UTC m=+1207.704711606" lastFinishedPulling="2026-02-16 13:13:03.834179712 +0000 UTC m=+1211.210528433" observedRunningTime="2026-02-16 13:13:04.353769086 +0000 UTC m=+1211.730117817" watchObservedRunningTime="2026-02-16 13:13:04.373634189 +0000 UTC m=+1211.749982910" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.412325 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548426 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0\") pod \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548468 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc\") pod \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548503 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb\") pod \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548593 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4b75\" (UniqueName: \"kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75\") pod 
\"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548708 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config\") pod \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548754 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb\") pod \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.576269 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75" (OuterVolumeSpecName: "kube-api-access-b4b75") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "kube-api-access-b4b75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.600078 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config" (OuterVolumeSpecName: "config") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.605416 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.617337 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.621320 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.627165 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.650638 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4b75\" (UniqueName: \"kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.650931 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.651017 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.651114 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.651200 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.651277 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.342066 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.342952 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" event={"ID":"56c1f9b3-2d24-4a15-a9ed-1e580d07368d","Type":"ContainerDied","Data":"0f5c11f9e9c71c25a9a0ab7c58bca8206f5d89e3f123e7f468eccce30de6f1c4"} Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.344344 4740 scope.go:117] "RemoveContainer" containerID="8afebe6e22dd2d5cabfba1e3897a579c79457dc251f6bc22ca4178b93bc80a65" Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.368897 4740 scope.go:117] "RemoveContainer" containerID="427a4347253b1f218cb29a2e5fa718786b3338a5e82bf3ce06ee2ab5a6254207" Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.380379 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"] Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.388239 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"] Feb 16 13:13:07 crc kubenswrapper[4740]: I0216 13:13:07.292380 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" path="/var/lib/kubelet/pods/56c1f9b3-2d24-4a15-a9ed-1e580d07368d/volumes" Feb 16 13:13:09 crc kubenswrapper[4740]: I0216 13:13:09.386395 4740 generic.go:334] "Generic (PLEG): container finished" podID="798bf8e1-4a33-48eb-bbb3-9be8d38027de" containerID="88c2737be162f9ce8e9f65cc3abfbf6976ffda5494fd0d0154f6ce2147c27b29" exitCode=0 Feb 16 13:13:09 crc kubenswrapper[4740]: I0216 13:13:09.386520 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9964" event={"ID":"798bf8e1-4a33-48eb-bbb3-9be8d38027de","Type":"ContainerDied","Data":"88c2737be162f9ce8e9f65cc3abfbf6976ffda5494fd0d0154f6ce2147c27b29"} Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.608072 4740 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.608408 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.798551 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.875497 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data\") pod \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.875584 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbh27\" (UniqueName: \"kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27\") pod \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.875652 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts\") pod \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.875872 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle\") pod \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.882459 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts" (OuterVolumeSpecName: "scripts") pod "798bf8e1-4a33-48eb-bbb3-9be8d38027de" (UID: "798bf8e1-4a33-48eb-bbb3-9be8d38027de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.884216 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27" (OuterVolumeSpecName: "kube-api-access-jbh27") pod "798bf8e1-4a33-48eb-bbb3-9be8d38027de" (UID: "798bf8e1-4a33-48eb-bbb3-9be8d38027de"). InnerVolumeSpecName "kube-api-access-jbh27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.905260 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data" (OuterVolumeSpecName: "config-data") pod "798bf8e1-4a33-48eb-bbb3-9be8d38027de" (UID: "798bf8e1-4a33-48eb-bbb3-9be8d38027de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.911031 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "798bf8e1-4a33-48eb-bbb3-9be8d38027de" (UID: "798bf8e1-4a33-48eb-bbb3-9be8d38027de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.981977 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.982036 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.982051 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.982065 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbh27\" (UniqueName: \"kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.406756 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9964" event={"ID":"798bf8e1-4a33-48eb-bbb3-9be8d38027de","Type":"ContainerDied","Data":"7890daaec85b079293eb53769939fb89e5c479ff49cd46a21cd3a2bb42a4d2ce"} Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.406806 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7890daaec85b079293eb53769939fb89e5c479ff49cd46a21cd3a2bb42a4d2ce" Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.406937 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.598319 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.599078 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-api" containerID="cri-o://f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7" gracePeriod=30 Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.599014 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-log" containerID="cri-o://b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b" gracePeriod=30 Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.608184 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.608366 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": EOF" Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.619664 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.619920 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" containerName="nova-scheduler-scheduler" 
containerID="cri-o://4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" gracePeriod=30 Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.634148 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.634379 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" containerID="cri-o://02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04" gracePeriod=30 Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.634474 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" containerID="cri-o://46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece" gracePeriod=30 Feb 16 13:13:12 crc kubenswrapper[4740]: I0216 13:13:12.448845 4740 generic.go:334] "Generic (PLEG): container finished" podID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerID="b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b" exitCode=143 Feb 16 13:13:12 crc kubenswrapper[4740]: I0216 13:13:12.449211 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerDied","Data":"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b"} Feb 16 13:13:12 crc kubenswrapper[4740]: I0216 13:13:12.451929 4740 generic.go:334] "Generic (PLEG): container finished" podID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerID="02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04" exitCode=143 Feb 16 13:13:12 crc kubenswrapper[4740]: I0216 13:13:12.451962 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerDied","Data":"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04"} Feb 16 13:13:12 crc kubenswrapper[4740]: E0216 13:13:12.966259 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:13:12 crc kubenswrapper[4740]: E0216 13:13:12.967686 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:13:12 crc kubenswrapper[4740]: E0216 13:13:12.969450 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:13:12 crc kubenswrapper[4740]: E0216 13:13:12.969506 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" containerName="nova-scheduler-scheduler" Feb 16 13:13:14 crc kubenswrapper[4740]: I0216 13:13:14.781225 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.0.198:8775/\": read tcp 10.217.0.2:39742->10.217.0.198:8775: read: connection reset by peer" Feb 16 13:13:14 crc kubenswrapper[4740]: I0216 13:13:14.781290 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:39740->10.217.0.198:8775: read: connection reset by peer" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.248302 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.378533 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfmmj\" (UniqueName: \"kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj\") pod \"a5aba68d-a690-4494-84bd-ccf1ef18592b\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.379053 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs\") pod \"a5aba68d-a690-4494-84bd-ccf1ef18592b\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.379090 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs\") pod \"a5aba68d-a690-4494-84bd-ccf1ef18592b\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.379143 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data\") pod 
\"a5aba68d-a690-4494-84bd-ccf1ef18592b\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.379219 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle\") pod \"a5aba68d-a690-4494-84bd-ccf1ef18592b\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.381950 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs" (OuterVolumeSpecName: "logs") pod "a5aba68d-a690-4494-84bd-ccf1ef18592b" (UID: "a5aba68d-a690-4494-84bd-ccf1ef18592b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.392084 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj" (OuterVolumeSpecName: "kube-api-access-jfmmj") pod "a5aba68d-a690-4494-84bd-ccf1ef18592b" (UID: "a5aba68d-a690-4494-84bd-ccf1ef18592b"). InnerVolumeSpecName "kube-api-access-jfmmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.409778 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5aba68d-a690-4494-84bd-ccf1ef18592b" (UID: "a5aba68d-a690-4494-84bd-ccf1ef18592b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.414905 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data" (OuterVolumeSpecName: "config-data") pod "a5aba68d-a690-4494-84bd-ccf1ef18592b" (UID: "a5aba68d-a690-4494-84bd-ccf1ef18592b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.450782 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a5aba68d-a690-4494-84bd-ccf1ef18592b" (UID: "a5aba68d-a690-4494-84bd-ccf1ef18592b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.481566 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.481604 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfmmj\" (UniqueName: \"kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.481614 4740 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.481624 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs\") on 
node \"crc\" DevicePath \"\"" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.481633 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.493134 4740 generic.go:334] "Generic (PLEG): container finished" podID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerID="46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece" exitCode=0 Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.493178 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.493200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerDied","Data":"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece"} Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.493232 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerDied","Data":"ec2fb41ee7bc0398ae3bc21b2bd16713c705f380c7a143d9071f0092702463d2"} Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.493265 4740 scope.go:117] "RemoveContainer" containerID="46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.526782 4740 scope.go:117] "RemoveContainer" containerID="02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.538652 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.545928 4740 scope.go:117] "RemoveContainer" 
containerID="46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.546271 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece\": container with ID starting with 46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece not found: ID does not exist" containerID="46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.546301 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece"} err="failed to get container status \"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece\": rpc error: code = NotFound desc = could not find container \"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece\": container with ID starting with 46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece not found: ID does not exist" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.546323 4740 scope.go:117] "RemoveContainer" containerID="02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.546514 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04\": container with ID starting with 02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04 not found: ID does not exist" containerID="02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.550490 4740 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04"} err="failed to get container status \"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04\": rpc error: code = NotFound desc = could not find container \"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04\": container with ID starting with 02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04 not found: ID does not exist" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.581315 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.593728 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.594219 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594236 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.594254 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="dnsmasq-dns" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594261 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="dnsmasq-dns" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.594283 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798bf8e1-4a33-48eb-bbb3-9be8d38027de" containerName="nova-manage" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594289 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="798bf8e1-4a33-48eb-bbb3-9be8d38027de" containerName="nova-manage" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 
13:13:15.594304 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594310 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.594323 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="init" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594329 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="init" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594506 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594518 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594528 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="dnsmasq-dns" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594540 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="798bf8e1-4a33-48eb-bbb3-9be8d38027de" containerName="nova-manage" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.597366 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.600300 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.600352 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.602034 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.685507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbdc\" (UniqueName: \"kubernetes.io/projected/722ecd51-0827-457b-8d5c-246a1a57e24a-kube-api-access-lvbdc\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.685622 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722ecd51-0827-457b-8d5c-246a1a57e24a-logs\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.685699 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.685882 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.686116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-config-data\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.788352 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-config-data\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.788430 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvbdc\" (UniqueName: \"kubernetes.io/projected/722ecd51-0827-457b-8d5c-246a1a57e24a-kube-api-access-lvbdc\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.788504 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722ecd51-0827-457b-8d5c-246a1a57e24a-logs\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.789102 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722ecd51-0827-457b-8d5c-246a1a57e24a-logs\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.789177 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.789521 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.792843 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.795100 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.795489 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-config-data\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.806578 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvbdc\" (UniqueName: \"kubernetes.io/projected/722ecd51-0827-457b-8d5c-246a1a57e24a-kube-api-access-lvbdc\") pod 
\"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.914636 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:13:16 crc kubenswrapper[4740]: I0216 13:13:16.397699 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:16 crc kubenswrapper[4740]: W0216 13:13:16.404516 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722ecd51_0827_457b_8d5c_246a1a57e24a.slice/crio-2041102166c83b93a81bbf2f5cc733694f04c14aa826f5f3c849687ed5abfdd8 WatchSource:0}: Error finding container 2041102166c83b93a81bbf2f5cc733694f04c14aa826f5f3c849687ed5abfdd8: Status 404 returned error can't find the container with id 2041102166c83b93a81bbf2f5cc733694f04c14aa826f5f3c849687ed5abfdd8 Feb 16 13:13:16 crc kubenswrapper[4740]: I0216 13:13:16.509862 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722ecd51-0827-457b-8d5c-246a1a57e24a","Type":"ContainerStarted","Data":"2041102166c83b93a81bbf2f5cc733694f04c14aa826f5f3c849687ed5abfdd8"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.216946 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.294915 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" path="/var/lib/kubelet/pods/a5aba68d-a690-4494-84bd-ccf1ef18592b/volumes" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.315585 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle\") pod \"e8eb17c9-d042-4220-bc24-e56054e5be4d\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.315680 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbbdl\" (UniqueName: \"kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl\") pod \"e8eb17c9-d042-4220-bc24-e56054e5be4d\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.315769 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data\") pod \"e8eb17c9-d042-4220-bc24-e56054e5be4d\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.332058 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl" (OuterVolumeSpecName: "kube-api-access-xbbdl") pod "e8eb17c9-d042-4220-bc24-e56054e5be4d" (UID: "e8eb17c9-d042-4220-bc24-e56054e5be4d"). InnerVolumeSpecName "kube-api-access-xbbdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.349904 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data" (OuterVolumeSpecName: "config-data") pod "e8eb17c9-d042-4220-bc24-e56054e5be4d" (UID: "e8eb17c9-d042-4220-bc24-e56054e5be4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.355415 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8eb17c9-d042-4220-bc24-e56054e5be4d" (UID: "e8eb17c9-d042-4220-bc24-e56054e5be4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.418431 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.418469 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbbdl\" (UniqueName: \"kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.418484 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.429661 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.519940 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.520308 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.520354 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.520380 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.520413 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmwj2\" (UniqueName: \"kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.520882 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.521162 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs" (OuterVolumeSpecName: "logs") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.521500 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.521662 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722ecd51-0827-457b-8d5c-246a1a57e24a","Type":"ContainerStarted","Data":"2395b9e17cec64c8014d0a7c6d71db13ce15b31ea3570efd3f817ea07eceb122"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.521690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722ecd51-0827-457b-8d5c-246a1a57e24a","Type":"ContainerStarted","Data":"bc12862590c5214eec7e789ebe6acd368a66e5e159fc75baddb6b2b853f21d49"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.525983 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2" (OuterVolumeSpecName: "kube-api-access-kmwj2") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "kube-api-access-kmwj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.531662 4740 generic.go:334] "Generic (PLEG): container finished" podID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerID="f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7" exitCode=0 Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.531761 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerDied","Data":"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.531788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerDied","Data":"79ef3154c2df0c308700bb21d2bcd8c8251a70b34c1f3505854cc7ce16a8e1aa"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.531804 4740 scope.go:117] "RemoveContainer" containerID="f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.531933 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.542718 4740 generic.go:334] "Generic (PLEG): container finished" podID="e8eb17c9-d042-4220-bc24-e56054e5be4d" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" exitCode=0 Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.542762 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8eb17c9-d042-4220-bc24-e56054e5be4d","Type":"ContainerDied","Data":"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.542786 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8eb17c9-d042-4220-bc24-e56054e5be4d","Type":"ContainerDied","Data":"77398076beb6a4d90d4fcff474e340ac86b26fe45053c777aa70824a5ad08261"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.542850 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.550478 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.551277 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.551252122 podStartE2EDuration="2.551252122s" podCreationTimestamp="2026-02-16 13:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:13:17.547714212 +0000 UTC m=+1224.924062943" watchObservedRunningTime="2026-02-16 13:13:17.551252122 +0000 UTC m=+1224.927600843" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.556905 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data" (OuterVolumeSpecName: "config-data") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.570788 4740 scope.go:117] "RemoveContainer" containerID="b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.589580 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.600964 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.601862 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.609759 4740 scope.go:117] "RemoveContainer" containerID="f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7" Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.610297 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7\": container with ID starting with f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7 not found: ID does not exist" containerID="f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.610333 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7"} err="failed to get container status \"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7\": rpc error: code = NotFound desc = could not find container \"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7\": container with ID starting with f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7 not found: ID does not exist" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.610358 4740 scope.go:117] "RemoveContainer" containerID="b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b" Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.610869 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b\": container with ID starting with b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b not found: ID does not exist" containerID="b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.610893 
4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b"} err="failed to get container status \"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b\": rpc error: code = NotFound desc = could not find container \"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b\": container with ID starting with b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b not found: ID does not exist" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.610909 4740 scope.go:117] "RemoveContainer" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.614998 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.617223 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.617743 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-api" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.617767 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-api" Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.617781 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" containerName="nova-scheduler-scheduler" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.617789 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" containerName="nova-scheduler-scheduler" Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.617838 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-log" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.617847 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-log" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.618077 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-api" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.618101 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-log" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.618115 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" 
containerName="nova-scheduler-scheduler" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.618865 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.625009 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.625602 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.626776 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.626793 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.626802 4740 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.626851 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmwj2\" (UniqueName: \"kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.626859 4740 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.659659 4740 scope.go:117] "RemoveContainer" 
containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.660649 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff\": container with ID starting with 4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff not found: ID does not exist" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.660678 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff"} err="failed to get container status \"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff\": rpc error: code = NotFound desc = could not find container \"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff\": container with ID starting with 4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff not found: ID does not exist" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.729925 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-config-data\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.729998 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxl97\" (UniqueName: \"kubernetes.io/projected/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-kube-api-access-xxl97\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.730139 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.834135 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-config-data\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.834305 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxl97\" (UniqueName: \"kubernetes.io/projected/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-kube-api-access-xxl97\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.834392 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.839352 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-config-data\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.840083 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.856685 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxl97\" (UniqueName: \"kubernetes.io/projected/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-kube-api-access-xxl97\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.940403 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.061617 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.076506 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.097863 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.099356 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.105483 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.105579 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.105491 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.140512 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.245671 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvn2l\" (UniqueName: \"kubernetes.io/projected/56ee2c81-2a61-476c-9731-b94363864633-kube-api-access-rvn2l\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.246730 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.246862 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-public-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.247250 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-config-data\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.247392 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ee2c81-2a61-476c-9731-b94363864633-logs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.247524 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.349302 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-config-data\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.349350 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ee2c81-2a61-476c-9731-b94363864633-logs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.349383 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc 
kubenswrapper[4740]: I0216 13:13:18.349416 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvn2l\" (UniqueName: \"kubernetes.io/projected/56ee2c81-2a61-476c-9731-b94363864633-kube-api-access-rvn2l\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.349449 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.349473 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-public-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.350458 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ee2c81-2a61-476c-9731-b94363864633-logs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.354273 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.355081 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-config-data\") pod \"nova-api-0\" (UID: 
\"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.357345 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.357457 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-public-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.371614 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvn2l\" (UniqueName: \"kubernetes.io/projected/56ee2c81-2a61-476c-9731-b94363864633-kube-api-access-rvn2l\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.442151 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.510502 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:18 crc kubenswrapper[4740]: W0216 13:13:18.512329 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3ba9a19_9826_4c43_9907_8cd8f1a4272a.slice/crio-d4b1fc8c32363df257b82c63a63874000da884f4d3fb2910ca85d0226789f14b WatchSource:0}: Error finding container d4b1fc8c32363df257b82c63a63874000da884f4d3fb2910ca85d0226789f14b: Status 404 returned error can't find the container with id d4b1fc8c32363df257b82c63a63874000da884f4d3fb2910ca85d0226789f14b Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.559050 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e3ba9a19-9826-4c43-9907-8cd8f1a4272a","Type":"ContainerStarted","Data":"d4b1fc8c32363df257b82c63a63874000da884f4d3fb2910ca85d0226789f14b"} Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.892199 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.296232 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" path="/var/lib/kubelet/pods/74e5f1b3-dadf-447d-b4c7-6c7274acb380/volumes" Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.297631 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" path="/var/lib/kubelet/pods/e8eb17c9-d042-4220-bc24-e56054e5be4d/volumes" Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.572881 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ee2c81-2a61-476c-9731-b94363864633","Type":"ContainerStarted","Data":"64b6631e907eadec0d22847a5acabac6d6e1743026eed159a7ef3d71abf9775e"} Feb 16 
13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.572968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ee2c81-2a61-476c-9731-b94363864633","Type":"ContainerStarted","Data":"b6b85e2869d947a24d8d9dac79182b7517ad5b50ad4514bd0fcc077455e12c61"} Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.572984 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ee2c81-2a61-476c-9731-b94363864633","Type":"ContainerStarted","Data":"765e1dd53679a6c9a1605ed5432907100f51c8fa3b50d221701b8b937707aee7"} Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.575593 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e3ba9a19-9826-4c43-9907-8cd8f1a4272a","Type":"ContainerStarted","Data":"eb1d3be1e4ce5d680e2e131b4564ff75d7fbb999f8ad50b5291848ac4fafff05"} Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.602888 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.6028709349999999 podStartE2EDuration="1.602870935s" podCreationTimestamp="2026-02-16 13:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:13:19.595309677 +0000 UTC m=+1226.971658398" watchObservedRunningTime="2026-02-16 13:13:19.602870935 +0000 UTC m=+1226.979219646" Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.620086 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.620071685 podStartE2EDuration="2.620071685s" podCreationTimestamp="2026-02-16 13:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:13:19.617250346 +0000 UTC m=+1226.993599067" watchObservedRunningTime="2026-02-16 13:13:19.620071685 +0000 UTC 
m=+1226.996420406" Feb 16 13:13:20 crc kubenswrapper[4740]: I0216 13:13:20.915723 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:13:20 crc kubenswrapper[4740]: I0216 13:13:20.916076 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:13:22 crc kubenswrapper[4740]: I0216 13:13:22.941181 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 13:13:25 crc kubenswrapper[4740]: I0216 13:13:25.915365 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 13:13:25 crc kubenswrapper[4740]: I0216 13:13:25.915879 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 13:13:26 crc kubenswrapper[4740]: I0216 13:13:26.928073 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="722ecd51-0827-457b-8d5c-246a1a57e24a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:13:26 crc kubenswrapper[4740]: I0216 13:13:26.928479 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="722ecd51-0827-457b-8d5c-246a1a57e24a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:13:27 crc kubenswrapper[4740]: I0216 13:13:27.940634 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 13:13:27 crc kubenswrapper[4740]: I0216 13:13:27.966012 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 13:13:28 crc kubenswrapper[4740]: I0216 
13:13:28.443492 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:13:28 crc kubenswrapper[4740]: I0216 13:13:28.443532 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:13:28 crc kubenswrapper[4740]: I0216 13:13:28.711175 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 13:13:29 crc kubenswrapper[4740]: I0216 13:13:29.496077 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="56ee2c81-2a61-476c-9731-b94363864633" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:13:29 crc kubenswrapper[4740]: I0216 13:13:29.496209 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="56ee2c81-2a61-476c-9731-b94363864633" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:13:29 crc kubenswrapper[4740]: I0216 13:13:29.770684 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 13:13:35 crc kubenswrapper[4740]: I0216 13:13:35.921461 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 13:13:35 crc kubenswrapper[4740]: I0216 13:13:35.926135 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 13:13:35 crc kubenswrapper[4740]: I0216 13:13:35.930896 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 13:13:36 crc kubenswrapper[4740]: I0216 13:13:36.746890 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.449553 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.450033 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.450349 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.450384 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.463297 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.463464 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 13:13:45 crc kubenswrapper[4740]: I0216 13:13:45.574910 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:13:45 crc kubenswrapper[4740]: I0216 13:13:45.575026 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:13:46 crc kubenswrapper[4740]: I0216 13:13:46.535844 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:13:47 crc kubenswrapper[4740]: I0216 13:13:47.395203 4740 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:13:50 crc kubenswrapper[4740]: I0216 13:13:50.743674 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="rabbitmq" containerID="cri-o://80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088" gracePeriod=604796 Feb 16 13:13:51 crc kubenswrapper[4740]: I0216 13:13:51.939996 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="rabbitmq" containerID="cri-o://3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1" gracePeriod=604796 Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.364474 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477324 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477584 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477661 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " Feb 16 13:13:57 crc 
kubenswrapper[4740]: I0216 13:13:57.477705 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477776 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477829 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477882 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477922 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477945 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477971 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtjzt\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477989 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.479414 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.480235 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.480416 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.485170 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt" (OuterVolumeSpecName: "kube-api-access-xtjzt") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "kube-api-access-xtjzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.486939 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info" (OuterVolumeSpecName: "pod-info") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.527324 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.527775 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.528282 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data" (OuterVolumeSpecName: "config-data") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.530078 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.563013 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf" (OuterVolumeSpecName: "server-conf") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583865 4740 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583893 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583902 4740 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583911 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583922 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtjzt\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583930 4740 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583952 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 
13:13:57.583961 4740 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583969 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583977 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.617535 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.620237 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.686167 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.686214 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.935078 4740 generic.go:334] "Generic (PLEG): container finished" podID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerID="80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088" exitCode=0 Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.935135 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerDied","Data":"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088"} Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.935175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerDied","Data":"934ceceace7365e9c0090e9a012126311d06e3cf25d1f4641361df1885a08c73"} Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.935170 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.935197 4740 scope.go:117] "RemoveContainer" containerID="80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.971027 4740 scope.go:117] "RemoveContainer" containerID="63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133" Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.994726 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.006308 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.035551 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:13:58 crc kubenswrapper[4740]: E0216 13:13:58.035957 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="rabbitmq" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.035972 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="rabbitmq" Feb 16 13:13:58 crc kubenswrapper[4740]: E0216 13:13:58.035986 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="setup-container" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.035993 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="setup-container" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.036165 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="rabbitmq" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.037080 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.042297 4740 scope.go:117] "RemoveContainer" containerID="80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.046405 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 13:13:58 crc kubenswrapper[4740]: E0216 13:13:58.046630 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088\": container with ID starting with 80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088 not found: ID does not exist" containerID="80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.046661 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088"} err="failed to get container status \"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088\": rpc error: code = NotFound desc = could not find container \"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088\": container with ID starting with 80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088 not found: ID does not exist" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.046684 4740 scope.go:117] "RemoveContainer" containerID="63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.046886 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.046948 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 13:13:58 crc 
kubenswrapper[4740]: I0216 13:13:58.047000 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-c72m7" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.047074 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.047110 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 13:13:58 crc kubenswrapper[4740]: E0216 13:13:58.047183 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133\": container with ID starting with 63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133 not found: ID does not exist" containerID="63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.047205 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133"} err="failed to get container status \"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133\": rpc error: code = NotFound desc = could not find container \"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133\": container with ID starting with 63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133 not found: ID does not exist" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.047530 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.060265 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.196603 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzp44\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-kube-api-access-mzp44\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197033 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ad16000-fb9f-4231-91fe-239907bba675-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197068 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197093 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197141 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ad16000-fb9f-4231-91fe-239907bba675-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197206 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197231 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197267 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-config-data\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197302 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197348 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197378 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299252 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299325 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzp44\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-kube-api-access-mzp44\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299363 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ad16000-fb9f-4231-91fe-239907bba675-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299387 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299405 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299445 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ad16000-fb9f-4231-91fe-239907bba675-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299500 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299518 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299544 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-config-data\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299603 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299967 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.300371 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.301547 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.304616 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-config-data\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.304891 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.304959 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.306313 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ad16000-fb9f-4231-91fe-239907bba675-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.306701 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.307179 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ad16000-fb9f-4231-91fe-239907bba675-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.313365 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.319457 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mzp44\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-kube-api-access-mzp44\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.347343 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.469970 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.510498 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.604663 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j2ff\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605373 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605448 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " 
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605522 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605573 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605781 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605840 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605900 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605925 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605999 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.606031 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.606378 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.606566 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.607184 4740 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.607211 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.608493 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.610537 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff" (OuterVolumeSpecName: "kube-api-access-8j2ff") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "kube-api-access-8j2ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.610578 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.611210 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info" (OuterVolumeSpecName: "pod-info") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.612052 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.615756 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.647403 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data" (OuterVolumeSpecName: "config-data") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.672988 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf" (OuterVolumeSpecName: "server-conf") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709092 4740 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709122 4740 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709132 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709143 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709155 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j2ff\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709166 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709195 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709207 4740 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.778303 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.784384 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.811181 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.811593 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.944843 4740 generic.go:334] "Generic (PLEG): container finished" podID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerID="3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1" exitCode=0 Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.944953 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerDied","Data":"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1"} Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.944982 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerDied","Data":"57a77e39696732ba0c2e89d52e10f74cd6c56edebaba2ddd54807982f361b511"} Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.944978 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.945026 4740 scope.go:117] "RemoveContainer" containerID="3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.991765 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.992951 4740 scope.go:117] "RemoveContainer" containerID="ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.006920 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.035313 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:13:59 crc kubenswrapper[4740]: E0216 13:13:59.035726 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="rabbitmq" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.035741 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="rabbitmq" Feb 16 13:13:59 crc kubenswrapper[4740]: E0216 13:13:59.035770 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="setup-container" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.035776 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="setup-container" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.035949 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="rabbitmq" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.037122 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.040516 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042166 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042313 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042315 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042573 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042870 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-x99bs" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042954 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.044775 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.069389 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.078278 4740 scope.go:117] "RemoveContainer" containerID="3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1" Feb 16 13:13:59 crc kubenswrapper[4740]: E0216 13:13:59.078841 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1\": container with ID starting with 3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1 not found: ID does not exist" containerID="3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.078874 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1"} err="failed to get container status \"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1\": rpc error: code = NotFound desc = could not find container \"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1\": container with ID starting with 3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1 not found: ID does not exist" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.078897 4740 scope.go:117] "RemoveContainer" containerID="ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109" Feb 16 13:13:59 crc kubenswrapper[4740]: E0216 13:13:59.079149 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109\": container with ID starting with ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109 not found: ID does not exist" containerID="ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.079171 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109"} err="failed to get container status \"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109\": rpc error: code = NotFound desc = could not find container \"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109\": container with ID 
starting with ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109 not found: ID does not exist" Feb 16 13:13:59 crc kubenswrapper[4740]: W0216 13:13:59.101937 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ad16000_fb9f_4231_91fe_239907bba675.slice/crio-a06a8d5419a0b05d5daec34c9e82615d117783713b489d406194f00eafb2cbe9 WatchSource:0}: Error finding container a06a8d5419a0b05d5daec34c9e82615d117783713b489d406194f00eafb2cbe9: Status 404 returned error can't find the container with id a06a8d5419a0b05d5daec34c9e82615d117783713b489d406194f00eafb2cbe9 Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119327 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119414 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119454 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05abd29a-2c3c-4129-9afd-859a65e1ef45-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119482 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119527 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119552 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119567 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05abd29a-2c3c-4129-9afd-859a65e1ef45-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119599 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119617 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119641 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2gkw\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-kube-api-access-w2gkw\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221283 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221372 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221410 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/05abd29a-2c3c-4129-9afd-859a65e1ef45-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221469 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221506 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221555 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2gkw\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-kube-api-access-w2gkw\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221608 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221721 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221792 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05abd29a-2c3c-4129-9afd-859a65e1ef45-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221867 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.222053 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.222128 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.222155 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.222351 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.223047 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.224236 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.224260 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.226153 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.226272 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05abd29a-2c3c-4129-9afd-859a65e1ef45-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.226852 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.228020 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05abd29a-2c3c-4129-9afd-859a65e1ef45-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.240437 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2gkw\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-kube-api-access-w2gkw\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.272574 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.309446 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" path="/var/lib/kubelet/pods/67441c1a-f0ea-4873-bfe7-d1b25caa58a2/volumes" Feb 
16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.310242 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" path="/var/lib/kubelet/pods/ba652ec6-7bab-4f13-836b-35b3c7c8325f/volumes" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.543259 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.657634 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.663299 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.671328 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.677888 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734160 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734212 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734265 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmzzn\" (UniqueName: \"kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734528 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734614 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734733 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734925 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc 
kubenswrapper[4740]: I0216 13:13:59.836215 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836291 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836585 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836658 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836681 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836699 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836737 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmzzn\" (UniqueName: \"kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.837571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.837630 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.837760 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.837857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.837904 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.838481 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.853782 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmzzn\" (UniqueName: \"kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.964087 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6ad16000-fb9f-4231-91fe-239907bba675","Type":"ContainerStarted","Data":"a06a8d5419a0b05d5daec34c9e82615d117783713b489d406194f00eafb2cbe9"} Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.028410 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.086018 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:14:00 crc kubenswrapper[4740]: W0216 13:14:00.092694 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05abd29a_2c3c_4129_9afd_859a65e1ef45.slice/crio-1f40e950f8460ef1c97bc5a37c7ced976ff86b13eddd890d6c2b72d4521f6162 WatchSource:0}: Error finding container 1f40e950f8460ef1c97bc5a37c7ced976ff86b13eddd890d6c2b72d4521f6162: Status 404 returned error can't find the container with id 1f40e950f8460ef1c97bc5a37c7ced976ff86b13eddd890d6c2b72d4521f6162 Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.514458 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:14:00 crc kubenswrapper[4740]: W0216 13:14:00.539981 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59421bc1_357f_46f1_857a_57d1562762dc.slice/crio-039a269dce925faacbed19efb42458efe40417fe51c44a7f9115b8283cc86bec WatchSource:0}: Error finding container 039a269dce925faacbed19efb42458efe40417fe51c44a7f9115b8283cc86bec: Status 404 returned error can't find the container with id 039a269dce925faacbed19efb42458efe40417fe51c44a7f9115b8283cc86bec Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.974043 4740 generic.go:334] "Generic (PLEG): container finished" podID="59421bc1-357f-46f1-857a-57d1562762dc" containerID="cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16" exitCode=0 Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.974191 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-92s29" 
event={"ID":"59421bc1-357f-46f1-857a-57d1562762dc","Type":"ContainerDied","Data":"cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16"} Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.974415 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-92s29" event={"ID":"59421bc1-357f-46f1-857a-57d1562762dc","Type":"ContainerStarted","Data":"039a269dce925faacbed19efb42458efe40417fe51c44a7f9115b8283cc86bec"} Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.977594 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"05abd29a-2c3c-4129-9afd-859a65e1ef45","Type":"ContainerStarted","Data":"1f40e950f8460ef1c97bc5a37c7ced976ff86b13eddd890d6c2b72d4521f6162"} Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.979050 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6ad16000-fb9f-4231-91fe-239907bba675","Type":"ContainerStarted","Data":"469d7514fd60c07195ba5fa6d520b6d1b1d0d88a105ce40dcbffad62d47cbeeb"} Feb 16 13:14:01 crc kubenswrapper[4740]: I0216 13:14:01.991648 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-92s29" event={"ID":"59421bc1-357f-46f1-857a-57d1562762dc","Type":"ContainerStarted","Data":"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7"} Feb 16 13:14:01 crc kubenswrapper[4740]: I0216 13:14:01.991958 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:14:01 crc kubenswrapper[4740]: I0216 13:14:01.993671 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"05abd29a-2c3c-4129-9afd-859a65e1ef45","Type":"ContainerStarted","Data":"7ede78a0469b9559885c3cc7044e7e86ac21695b914c311b4c62098977e13b95"} Feb 16 13:14:02 crc kubenswrapper[4740]: I0216 13:14:02.017749 4740 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-5576978c7c-92s29" podStartSLOduration=3.017720347 podStartE2EDuration="3.017720347s" podCreationTimestamp="2026-02-16 13:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:14:02.012137832 +0000 UTC m=+1269.388486553" watchObservedRunningTime="2026-02-16 13:14:02.017720347 +0000 UTC m=+1269.394069108" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.030904 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.101503 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.101868 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="dnsmasq-dns" containerID="cri-o://d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32" gracePeriod=10 Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.238173 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-5sfmf"] Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.244451 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.268314 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-5sfmf"] Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.280979 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281038 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6lb4\" (UniqueName: \"kubernetes.io/projected/dc46d93a-139d-4125-9763-1093f49419a5-kube-api-access-f6lb4\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281089 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281125 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281153 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281407 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281440 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-config\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385153 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385236 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-config\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385307 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385335 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6lb4\" (UniqueName: \"kubernetes.io/projected/dc46d93a-139d-4125-9763-1093f49419a5-kube-api-access-f6lb4\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385383 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385411 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385432 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.386252 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-config\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.386292 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.386391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.387395 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.387534 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.387758 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-svc\") pod 
\"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.403670 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6lb4\" (UniqueName: \"kubernetes.io/projected/dc46d93a-139d-4125-9763-1093f49419a5-kube-api-access-f6lb4\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.597393 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.693463 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.795350 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.795873 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.795951 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 
13:14:10.796013 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.796055 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x6mf\" (UniqueName: \"kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.796108 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.801313 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf" (OuterVolumeSpecName: "kube-api-access-4x6mf") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "kube-api-access-4x6mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.844996 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.850968 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.854535 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.859192 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config" (OuterVolumeSpecName: "config") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.860697 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899015 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899056 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899069 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899083 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x6mf\" (UniqueName: \"kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899095 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899105 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.068080 4740 generic.go:334] "Generic (PLEG): container finished" podID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerID="d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32" exitCode=0 Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.068132 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" event={"ID":"2d75f780-0301-46d4-aa0b-ecdf66b8bc21","Type":"ContainerDied","Data":"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32"} Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.068190 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" event={"ID":"2d75f780-0301-46d4-aa0b-ecdf66b8bc21","Type":"ContainerDied","Data":"36059a67ae20b43daa15e0427481604179329ee78b6747e45aa5695fe8ffaa0e"} Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.068212 4740 scope.go:117] "RemoveContainer" containerID="d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.068220 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.091706 4740 scope.go:117] "RemoveContainer" containerID="93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.093190 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-5sfmf"] Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.127358 4740 scope.go:117] "RemoveContainer" containerID="d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32" Feb 16 13:14:11 crc kubenswrapper[4740]: E0216 13:14:11.127688 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32\": container with ID starting with d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32 not found: ID does not exist" containerID="d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.127803 4740 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32"} err="failed to get container status \"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32\": rpc error: code = NotFound desc = could not find container \"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32\": container with ID starting with d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32 not found: ID does not exist" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.127906 4740 scope.go:117] "RemoveContainer" containerID="93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831" Feb 16 13:14:11 crc kubenswrapper[4740]: E0216 13:14:11.128185 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831\": container with ID starting with 93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831 not found: ID does not exist" containerID="93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.128278 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831"} err="failed to get container status \"93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831\": rpc error: code = NotFound desc = could not find container \"93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831\": container with ID starting with 93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831 not found: ID does not exist" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.133446 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.141943 4740 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.303795 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" path="/var/lib/kubelet/pods/2d75f780-0301-46d4-aa0b-ecdf66b8bc21/volumes" Feb 16 13:14:12 crc kubenswrapper[4740]: I0216 13:14:12.083154 4740 generic.go:334] "Generic (PLEG): container finished" podID="dc46d93a-139d-4125-9763-1093f49419a5" containerID="e726b282a1b9e748463f67cf69afcf36e85cd50adafa2bc4fa3d417259cd436c" exitCode=0 Feb 16 13:14:12 crc kubenswrapper[4740]: I0216 13:14:12.083216 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" event={"ID":"dc46d93a-139d-4125-9763-1093f49419a5","Type":"ContainerDied","Data":"e726b282a1b9e748463f67cf69afcf36e85cd50adafa2bc4fa3d417259cd436c"} Feb 16 13:14:12 crc kubenswrapper[4740]: I0216 13:14:12.083266 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" event={"ID":"dc46d93a-139d-4125-9763-1093f49419a5","Type":"ContainerStarted","Data":"4338e12bb91d2ca9c1548e22b396f5f7b2f1f150520816f0c1790eae55547b95"} Feb 16 13:14:13 crc kubenswrapper[4740]: I0216 13:14:13.096512 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" event={"ID":"dc46d93a-139d-4125-9763-1093f49419a5","Type":"ContainerStarted","Data":"57120e1caa117d86690ef3fa842a613ba387873abdadc27842e4798377615c7a"} Feb 16 13:14:13 crc kubenswrapper[4740]: I0216 13:14:13.097046 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:13 crc kubenswrapper[4740]: I0216 13:14:13.124956 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" podStartSLOduration=3.124941057 podStartE2EDuration="3.124941057s" podCreationTimestamp="2026-02-16 13:14:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:14:13.122102448 +0000 UTC m=+1280.498451169" watchObservedRunningTime="2026-02-16 13:14:13.124941057 +0000 UTC m=+1280.501289778" Feb 16 13:14:15 crc kubenswrapper[4740]: I0216 13:14:15.574733 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:14:15 crc kubenswrapper[4740]: I0216 13:14:15.575148 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:14:20 crc kubenswrapper[4740]: I0216 13:14:20.599041 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:20 crc kubenswrapper[4740]: I0216 13:14:20.660422 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:14:20 crc kubenswrapper[4740]: I0216 13:14:20.660752 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5576978c7c-92s29" podUID="59421bc1-357f-46f1-857a-57d1562762dc" containerName="dnsmasq-dns" containerID="cri-o://741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7" gracePeriod=10 Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.158699 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.176777 4740 generic.go:334] "Generic (PLEG): container finished" podID="59421bc1-357f-46f1-857a-57d1562762dc" containerID="741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7" exitCode=0 Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.176834 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-92s29" event={"ID":"59421bc1-357f-46f1-857a-57d1562762dc","Type":"ContainerDied","Data":"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7"} Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.176866 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-92s29" event={"ID":"59421bc1-357f-46f1-857a-57d1562762dc","Type":"ContainerDied","Data":"039a269dce925faacbed19efb42458efe40417fe51c44a7f9115b8283cc86bec"} Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.176870 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.176885 4740 scope.go:117] "RemoveContainer" containerID="741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.200138 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.200306 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.201233 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.201285 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmzzn\" (UniqueName: \"kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.201309 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: 
\"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.201420 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.201551 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.210336 4740 scope.go:117] "RemoveContainer" containerID="cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.237075 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn" (OuterVolumeSpecName: "kube-api-access-rmzzn") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "kube-api-access-rmzzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.259868 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.260218 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.270262 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.271355 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.294158 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config" (OuterVolumeSpecName: "config") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.298224 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304277 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304318 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304332 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304342 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304353 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmzzn\" (UniqueName: \"kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304364 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304373 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.363249 4740 scope.go:117] "RemoveContainer" containerID="741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7" Feb 16 13:14:21 crc kubenswrapper[4740]: E0216 13:14:21.363725 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7\": container with ID starting with 741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7 not found: ID does not exist" containerID="741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.363766 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7"} err="failed to get container status \"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7\": rpc error: code = NotFound desc = could not find container \"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7\": container with ID starting with 741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7 not found: ID does not exist" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.363792 4740 scope.go:117] "RemoveContainer" containerID="cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16" Feb 16 13:14:21 crc kubenswrapper[4740]: E0216 13:14:21.364391 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16\": container with ID starting with cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16 not found: ID does not exist" containerID="cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.364416 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16"} err="failed to get container status \"cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16\": rpc error: code = NotFound desc = could not find container \"cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16\": container with ID starting with cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16 not found: ID does not exist" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.510205 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.518405 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:14:23 crc kubenswrapper[4740]: I0216 13:14:23.293853 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59421bc1-357f-46f1-857a-57d1562762dc" path="/var/lib/kubelet/pods/59421bc1-357f-46f1-857a-57d1562762dc/volumes" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.340242 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m"] Feb 16 13:14:29 crc kubenswrapper[4740]: E0216 13:14:29.341090 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59421bc1-357f-46f1-857a-57d1562762dc" containerName="init" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341101 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="59421bc1-357f-46f1-857a-57d1562762dc" 
containerName="init" Feb 16 13:14:29 crc kubenswrapper[4740]: E0216 13:14:29.341117 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59421bc1-357f-46f1-857a-57d1562762dc" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341123 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="59421bc1-357f-46f1-857a-57d1562762dc" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: E0216 13:14:29.341144 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="init" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341151 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="init" Feb 16 13:14:29 crc kubenswrapper[4740]: E0216 13:14:29.341179 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341185 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341340 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341354 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="59421bc1-357f-46f1-857a-57d1562762dc" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.342016 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.347422 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.347564 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.349115 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.351354 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.355677 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m"] Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.361451 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.361614 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8s9w\" (UniqueName: \"kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.361749 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.361778 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.464082 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8s9w\" (UniqueName: \"kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.464245 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.464286 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.464343 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.470486 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.470570 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.471369 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.495736 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8s9w\" (UniqueName: \"kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.671804 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:30 crc kubenswrapper[4740]: I0216 13:14:30.196879 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m"] Feb 16 13:14:30 crc kubenswrapper[4740]: W0216 13:14:30.197105 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e403d2d_bd7d_4fa6_a2a4_e15f63d2b090.slice/crio-e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a WatchSource:0}: Error finding container e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a: Status 404 returned error can't find the container with id e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a Feb 16 13:14:30 crc kubenswrapper[4740]: I0216 13:14:30.200585 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:14:30 crc kubenswrapper[4740]: I0216 13:14:30.283935 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" event={"ID":"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090","Type":"ContainerStarted","Data":"e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a"} Feb 16 13:14:33 crc kubenswrapper[4740]: I0216 
13:14:33.313071 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ad16000-fb9f-4231-91fe-239907bba675" containerID="469d7514fd60c07195ba5fa6d520b6d1b1d0d88a105ce40dcbffad62d47cbeeb" exitCode=0 Feb 16 13:14:33 crc kubenswrapper[4740]: I0216 13:14:33.313457 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6ad16000-fb9f-4231-91fe-239907bba675","Type":"ContainerDied","Data":"469d7514fd60c07195ba5fa6d520b6d1b1d0d88a105ce40dcbffad62d47cbeeb"} Feb 16 13:14:34 crc kubenswrapper[4740]: I0216 13:14:34.323673 4740 generic.go:334] "Generic (PLEG): container finished" podID="05abd29a-2c3c-4129-9afd-859a65e1ef45" containerID="7ede78a0469b9559885c3cc7044e7e86ac21695b914c311b4c62098977e13b95" exitCode=0 Feb 16 13:14:34 crc kubenswrapper[4740]: I0216 13:14:34.323773 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"05abd29a-2c3c-4129-9afd-859a65e1ef45","Type":"ContainerDied","Data":"7ede78a0469b9559885c3cc7044e7e86ac21695b914c311b4c62098977e13b95"} Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.367214 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6ad16000-fb9f-4231-91fe-239907bba675","Type":"ContainerStarted","Data":"29ed1f86f1f0b70eb87583ecf27b796826750aeb2029ca351d0659f7b05a4282"} Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.368154 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.369136 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" event={"ID":"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090","Type":"ContainerStarted","Data":"1b370219bb2b59bfe8b61d51b3c7656a5cc6a9ac146dec61d19e46870e6cfe05"} Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.372746 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"05abd29a-2c3c-4129-9afd-859a65e1ef45","Type":"ContainerStarted","Data":"091a01464e132b090da2e3d7e1032571cac74ddefc7f9027a8c4417b4268942c"} Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.373160 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.423901 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.423880028 podStartE2EDuration="42.423880028s" podCreationTimestamp="2026-02-16 13:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:14:39.401317599 +0000 UTC m=+1306.777666330" watchObservedRunningTime="2026-02-16 13:14:39.423880028 +0000 UTC m=+1306.800228769" Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.436333 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" podStartSLOduration=2.421657157 podStartE2EDuration="10.436312648s" podCreationTimestamp="2026-02-16 13:14:29 +0000 UTC" firstStartedPulling="2026-02-16 13:14:30.200347745 +0000 UTC m=+1297.576696466" lastFinishedPulling="2026-02-16 13:14:38.215003226 +0000 UTC m=+1305.591351957" observedRunningTime="2026-02-16 13:14:39.419295873 +0000 UTC m=+1306.795644604" watchObservedRunningTime="2026-02-16 13:14:39.436312648 +0000 UTC m=+1306.812661379" Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.446987 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.446968282 podStartE2EDuration="41.446968282s" podCreationTimestamp="2026-02-16 13:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 13:14:39.445737714 +0000 UTC m=+1306.822086445" watchObservedRunningTime="2026-02-16 13:14:39.446968282 +0000 UTC m=+1306.823317013" Feb 16 13:14:45 crc kubenswrapper[4740]: I0216 13:14:45.575649 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:14:45 crc kubenswrapper[4740]: I0216 13:14:45.576304 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:14:45 crc kubenswrapper[4740]: I0216 13:14:45.576362 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:14:45 crc kubenswrapper[4740]: I0216 13:14:45.577194 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:14:45 crc kubenswrapper[4740]: I0216 13:14:45.577262 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1" gracePeriod=600 Feb 16 13:14:46 crc kubenswrapper[4740]: I0216 
13:14:46.441095 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1" exitCode=0 Feb 16 13:14:46 crc kubenswrapper[4740]: I0216 13:14:46.441294 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1"} Feb 16 13:14:46 crc kubenswrapper[4740]: I0216 13:14:46.441738 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"} Feb 16 13:14:46 crc kubenswrapper[4740]: I0216 13:14:46.441755 4740 scope.go:117] "RemoveContainer" containerID="3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f" Feb 16 13:14:48 crc kubenswrapper[4740]: I0216 13:14:48.516025 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 13:14:49 crc kubenswrapper[4740]: I0216 13:14:49.476261 4740 generic.go:334] "Generic (PLEG): container finished" podID="1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" containerID="1b370219bb2b59bfe8b61d51b3c7656a5cc6a9ac146dec61d19e46870e6cfe05" exitCode=0 Feb 16 13:14:49 crc kubenswrapper[4740]: I0216 13:14:49.476420 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" event={"ID":"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090","Type":"ContainerDied","Data":"1b370219bb2b59bfe8b61d51b3c7656a5cc6a9ac146dec61d19e46870e6cfe05"} Feb 16 13:14:49 crc kubenswrapper[4740]: I0216 13:14:49.547088 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:14:50 crc kubenswrapper[4740]: I0216 13:14:50.913447 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:50 crc kubenswrapper[4740]: I0216 13:14:50.998562 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8s9w\" (UniqueName: \"kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w\") pod \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " Feb 16 13:14:50 crc kubenswrapper[4740]: I0216 13:14:50.998935 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam\") pod \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " Feb 16 13:14:50 crc kubenswrapper[4740]: I0216 13:14:50.999112 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle\") pod \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " Feb 16 13:14:50 crc kubenswrapper[4740]: I0216 13:14:50.999144 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory\") pod \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.005267 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: 
"repo-setup-combined-ca-bundle") pod "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" (UID: "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.005493 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w" (OuterVolumeSpecName: "kube-api-access-g8s9w") pod "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" (UID: "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090"). InnerVolumeSpecName "kube-api-access-g8s9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.028990 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory" (OuterVolumeSpecName: "inventory") pod "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" (UID: "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.030992 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" (UID: "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.100909 4740 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.100960 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.100977 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8s9w\" (UniqueName: \"kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.100990 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.496198 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" event={"ID":"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090","Type":"ContainerDied","Data":"e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a"} Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.496251 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.496317 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.600695 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988"] Feb 16 13:14:51 crc kubenswrapper[4740]: E0216 13:14:51.601136 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.601156 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.601394 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.602097 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.604466 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.604695 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.604721 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.605082 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.613477 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988"] Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.712390 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86nwc\" (UniqueName: \"kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.712769 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.712929 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.814263 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86nwc\" (UniqueName: \"kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.814619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.814744 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.818534 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: 
\"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.818916 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.832505 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86nwc\" (UniqueName: \"kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.951305 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:52 crc kubenswrapper[4740]: I0216 13:14:52.548356 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988"] Feb 16 13:14:53 crc kubenswrapper[4740]: I0216 13:14:53.514891 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" event={"ID":"2abfe09c-2736-49b3-b4e5-fb0e30deb510","Type":"ContainerStarted","Data":"8b189f58d2d58c6ee0124c58c80471da26acb462e4179a056e84d0ee5ab1e143"} Feb 16 13:14:53 crc kubenswrapper[4740]: I0216 13:14:53.516202 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" event={"ID":"2abfe09c-2736-49b3-b4e5-fb0e30deb510","Type":"ContainerStarted","Data":"498ea411b013d22580800000c27282d8c77a5f1c0ef053cee727550c62fa60f4"} Feb 16 13:14:53 crc kubenswrapper[4740]: I0216 13:14:53.542347 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" podStartSLOduration=2.041711662 podStartE2EDuration="2.542325352s" podCreationTimestamp="2026-02-16 13:14:51 +0000 UTC" firstStartedPulling="2026-02-16 13:14:52.547177932 +0000 UTC m=+1319.923526653" lastFinishedPulling="2026-02-16 13:14:53.047791622 +0000 UTC m=+1320.424140343" observedRunningTime="2026-02-16 13:14:53.53430127 +0000 UTC m=+1320.910650001" watchObservedRunningTime="2026-02-16 13:14:53.542325352 +0000 UTC m=+1320.918674073" Feb 16 13:14:56 crc kubenswrapper[4740]: I0216 13:14:56.547957 4740 generic.go:334] "Generic (PLEG): container finished" podID="2abfe09c-2736-49b3-b4e5-fb0e30deb510" containerID="8b189f58d2d58c6ee0124c58c80471da26acb462e4179a056e84d0ee5ab1e143" exitCode=0 Feb 16 13:14:56 crc kubenswrapper[4740]: I0216 13:14:56.548014 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" event={"ID":"2abfe09c-2736-49b3-b4e5-fb0e30deb510","Type":"ContainerDied","Data":"8b189f58d2d58c6ee0124c58c80471da26acb462e4179a056e84d0ee5ab1e143"} Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.009261 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.058002 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86nwc\" (UniqueName: \"kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc\") pod \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.058049 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam\") pod \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.058083 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory\") pod \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.063652 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc" (OuterVolumeSpecName: "kube-api-access-86nwc") pod "2abfe09c-2736-49b3-b4e5-fb0e30deb510" (UID: "2abfe09c-2736-49b3-b4e5-fb0e30deb510"). InnerVolumeSpecName "kube-api-access-86nwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.087984 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2abfe09c-2736-49b3-b4e5-fb0e30deb510" (UID: "2abfe09c-2736-49b3-b4e5-fb0e30deb510"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.088429 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory" (OuterVolumeSpecName: "inventory") pod "2abfe09c-2736-49b3-b4e5-fb0e30deb510" (UID: "2abfe09c-2736-49b3-b4e5-fb0e30deb510"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.160469 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86nwc\" (UniqueName: \"kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.160514 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.160527 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.571131 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" 
event={"ID":"2abfe09c-2736-49b3-b4e5-fb0e30deb510","Type":"ContainerDied","Data":"498ea411b013d22580800000c27282d8c77a5f1c0ef053cee727550c62fa60f4"} Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.571179 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="498ea411b013d22580800000c27282d8c77a5f1c0ef053cee727550c62fa60f4" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.571183 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.633252 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8"] Feb 16 13:14:58 crc kubenswrapper[4740]: E0216 13:14:58.633661 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2abfe09c-2736-49b3-b4e5-fb0e30deb510" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.633677 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2abfe09c-2736-49b3-b4e5-fb0e30deb510" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.633864 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2abfe09c-2736-49b3-b4e5-fb0e30deb510" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.634587 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.637016 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.637350 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.637536 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.639138 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.661948 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8"] Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.671047 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.671165 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: 
I0216 13:14:58.671221 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.671291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7m79\" (UniqueName: \"kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.772769 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.772915 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.772988 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.773084 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7m79\" (UniqueName: \"kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.778295 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.778842 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.779489 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.791173 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7m79\" (UniqueName: \"kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.957108 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:59 crc kubenswrapper[4740]: I0216 13:14:59.473016 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8"] Feb 16 13:14:59 crc kubenswrapper[4740]: W0216 13:14:59.473648 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e96214f_a46e_451a_97d9_d448c66826f4.slice/crio-8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60 WatchSource:0}: Error finding container 8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60: Status 404 returned error can't find the container with id 8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60 Feb 16 13:14:59 crc kubenswrapper[4740]: I0216 13:14:59.580394 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" event={"ID":"8e96214f-a46e-451a-97d9-d448c66826f4","Type":"ContainerStarted","Data":"8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60"} Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.148970 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"] Feb 16 13:15:00 crc 
kubenswrapper[4740]: I0216 13:15:00.201691 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"] Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.202369 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.204921 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.204938 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.307216 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.308046 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.308178 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxbhj\" (UniqueName: \"kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj\") pod \"collect-profiles-29520795-jf4h8\" 
(UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.410398 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.410542 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbhj\" (UniqueName: \"kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.410650 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.411869 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.416958 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.432032 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbhj\" (UniqueName: \"kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.592035 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" event={"ID":"8e96214f-a46e-451a-97d9-d448c66826f4","Type":"ContainerStarted","Data":"7e2f7272c67b8c08fe7c64c98a0e4e52cdba5944d66dcc2cf2fb8eaf76c9dc54"} Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.620497 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" podStartSLOduration=1.8843872689999999 podStartE2EDuration="2.620476543s" podCreationTimestamp="2026-02-16 13:14:58 +0000 UTC" firstStartedPulling="2026-02-16 13:14:59.476913643 +0000 UTC m=+1326.853262454" lastFinishedPulling="2026-02-16 13:15:00.213003007 +0000 UTC m=+1327.589351728" observedRunningTime="2026-02-16 13:15:00.611843732 +0000 UTC m=+1327.988192453" watchObservedRunningTime="2026-02-16 13:15:00.620476543 +0000 UTC m=+1327.996825284" Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.675317 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:01 crc kubenswrapper[4740]: W0216 13:15:01.120492 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab47f99f_f805_4d2e_bdf6_6da944e511a5.slice/crio-6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f WatchSource:0}: Error finding container 6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f: Status 404 returned error can't find the container with id 6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f Feb 16 13:15:01 crc kubenswrapper[4740]: I0216 13:15:01.140764 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"] Feb 16 13:15:01 crc kubenswrapper[4740]: I0216 13:15:01.608222 4740 generic.go:334] "Generic (PLEG): container finished" podID="ab47f99f-f805-4d2e-bdf6-6da944e511a5" containerID="abe9c24d5f732811d552e04df67f2330c658e4db7a4f4498f3fb4c1af1df86df" exitCode=0 Feb 16 13:15:01 crc kubenswrapper[4740]: I0216 13:15:01.608388 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" event={"ID":"ab47f99f-f805-4d2e-bdf6-6da944e511a5","Type":"ContainerDied","Data":"abe9c24d5f732811d552e04df67f2330c658e4db7a4f4498f3fb4c1af1df86df"} Feb 16 13:15:01 crc kubenswrapper[4740]: I0216 13:15:01.608659 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" event={"ID":"ab47f99f-f805-4d2e-bdf6-6da944e511a5","Type":"ContainerStarted","Data":"6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f"} Feb 16 13:15:02 crc kubenswrapper[4740]: I0216 13:15:02.885137 4740 scope.go:117] "RemoveContainer" containerID="393ab583d053b27f8beb9f7c43ec09687ddca9d3ed124563beb0f63d010c9ebb" Feb 16 13:15:02 crc kubenswrapper[4740]: 
I0216 13:15:02.918662 4740 scope.go:117] "RemoveContainer" containerID="538fc5b7f98d6ae84456c8d0c054c6e4ef97df100afc94d9176239a93296b9b5" Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.064942 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.168696 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume\") pod \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.168793 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume\") pod \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.168903 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxbhj\" (UniqueName: \"kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj\") pod \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.169258 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab47f99f-f805-4d2e-bdf6-6da944e511a5" (UID: "ab47f99f-f805-4d2e-bdf6-6da944e511a5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.169327 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.175040 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj" (OuterVolumeSpecName: "kube-api-access-rxbhj") pod "ab47f99f-f805-4d2e-bdf6-6da944e511a5" (UID: "ab47f99f-f805-4d2e-bdf6-6da944e511a5"). InnerVolumeSpecName "kube-api-access-rxbhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.175743 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab47f99f-f805-4d2e-bdf6-6da944e511a5" (UID: "ab47f99f-f805-4d2e-bdf6-6da944e511a5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.271596 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxbhj\" (UniqueName: \"kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj\") on node \"crc\" DevicePath \"\"" Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.271944 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.637262 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" event={"ID":"ab47f99f-f805-4d2e-bdf6-6da944e511a5","Type":"ContainerDied","Data":"6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f"} Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.637305 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f" Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.637361 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" Feb 16 13:16:03 crc kubenswrapper[4740]: I0216 13:16:03.130655 4740 scope.go:117] "RemoveContainer" containerID="4b198406a524d7dff3e729a1eee0d73938c8ae12df658ec8480ab9355f0779b0" Feb 16 13:16:03 crc kubenswrapper[4740]: I0216 13:16:03.156077 4740 scope.go:117] "RemoveContainer" containerID="b6158fa1fc9cea7906b55a39bb9178812e4aaa28f5f59307e4b0bc0142b18d51" Feb 16 13:16:03 crc kubenswrapper[4740]: I0216 13:16:03.237735 4740 scope.go:117] "RemoveContainer" containerID="8f70084c6e792770a51a2232e265409290d0a24738129fda218357cafbb39d87" Feb 16 13:17:03 crc kubenswrapper[4740]: I0216 13:17:03.327455 4740 scope.go:117] "RemoveContainer" containerID="3eccc44255e03c3377abc687e9c41e721ea09718010749b621343a7f92179705" Feb 16 13:17:03 crc kubenswrapper[4740]: I0216 13:17:03.352557 4740 scope.go:117] "RemoveContainer" containerID="8ce27faa665c6476b0cc53766481f3f6617cbb88c529599da7ba09a849b8b74d" Feb 16 13:17:03 crc kubenswrapper[4740]: I0216 13:17:03.371274 4740 scope.go:117] "RemoveContainer" containerID="d6f9feb8edce8f2ec6ae24391658e5a12b683ef4cdb4b51bd4bc709071ae093a" Feb 16 13:17:03 crc kubenswrapper[4740]: I0216 13:17:03.392973 4740 scope.go:117] "RemoveContainer" containerID="915edf68cd3c7270b236b58655ea096c142d3604b5d0dd9bafbc0091d2a43aae" Feb 16 13:17:03 crc kubenswrapper[4740]: I0216 13:17:03.412275 4740 scope.go:117] "RemoveContainer" containerID="b47ff1a3cc1cf6e6ce635669516cb1eeff7f42e4967363ec4445652f9d813b11" Feb 16 13:17:15 crc kubenswrapper[4740]: I0216 13:17:15.575190 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:17:15 crc kubenswrapper[4740]: I0216 13:17:15.575905 4740 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:17:45 crc kubenswrapper[4740]: I0216 13:17:45.574696 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:17:45 crc kubenswrapper[4740]: I0216 13:17:45.575279 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.354474 4740 generic.go:334] "Generic (PLEG): container finished" podID="8e96214f-a46e-451a-97d9-d448c66826f4" containerID="7e2f7272c67b8c08fe7c64c98a0e4e52cdba5944d66dcc2cf2fb8eaf76c9dc54" exitCode=0 Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.354581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" event={"ID":"8e96214f-a46e-451a-97d9-d448c66826f4","Type":"ContainerDied","Data":"7e2f7272c67b8c08fe7c64c98a0e4e52cdba5944d66dcc2cf2fb8eaf76c9dc54"} Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.462402 4740 scope.go:117] "RemoveContainer" containerID="14b7b362722a6742396e9c6ada9fc6b8542386c6aa6a2d0db427fdcad3db1b40" Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.482632 4740 scope.go:117] "RemoveContainer" containerID="f251def4f6e79c938ed791e78c8d49d0d744dfe6ab6538388a7c75361bdf2939" Feb 16 
13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.503759 4740 scope.go:117] "RemoveContainer" containerID="10c1145d53cd2b89a58d20979cc3e78503d07b38832842c393d26e60199595dd" Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.525544 4740 scope.go:117] "RemoveContainer" containerID="a1fe664ff4dba633e5c2ceeabea6e248cc29fb01509a09c4d810877a6db7482b" Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.541720 4740 scope.go:117] "RemoveContainer" containerID="3c08f3c4d132ba264a22af327ebe21b3c3ed4324088fef015b40790eb4878e70" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.318640 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.379345 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" event={"ID":"8e96214f-a46e-451a-97d9-d448c66826f4","Type":"ContainerDied","Data":"8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60"} Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.379695 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.379729 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.386317 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7m79\" (UniqueName: \"kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79\") pod \"8e96214f-a46e-451a-97d9-d448c66826f4\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.386376 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle\") pod \"8e96214f-a46e-451a-97d9-d448c66826f4\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.386430 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory\") pod \"8e96214f-a46e-451a-97d9-d448c66826f4\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.386456 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam\") pod \"8e96214f-a46e-451a-97d9-d448c66826f4\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.396516 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8e96214f-a46e-451a-97d9-d448c66826f4" (UID: "8e96214f-a46e-451a-97d9-d448c66826f4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.396519 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79" (OuterVolumeSpecName: "kube-api-access-m7m79") pod "8e96214f-a46e-451a-97d9-d448c66826f4" (UID: "8e96214f-a46e-451a-97d9-d448c66826f4"). InnerVolumeSpecName "kube-api-access-m7m79". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.424140 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory" (OuterVolumeSpecName: "inventory") pod "8e96214f-a46e-451a-97d9-d448c66826f4" (UID: "8e96214f-a46e-451a-97d9-d448c66826f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.441031 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8e96214f-a46e-451a-97d9-d448c66826f4" (UID: "8e96214f-a46e-451a-97d9-d448c66826f4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.469601 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"] Feb 16 13:18:05 crc kubenswrapper[4740]: E0216 13:18:05.470043 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e96214f-a46e-451a-97d9-d448c66826f4" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.470059 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e96214f-a46e-451a-97d9-d448c66826f4" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 13:18:05 crc kubenswrapper[4740]: E0216 13:18:05.470093 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab47f99f-f805-4d2e-bdf6-6da944e511a5" containerName="collect-profiles" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.470099 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab47f99f-f805-4d2e-bdf6-6da944e511a5" containerName="collect-profiles" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.470289 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab47f99f-f805-4d2e-bdf6-6da944e511a5" containerName="collect-profiles" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.470313 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e96214f-a46e-451a-97d9-d448c66826f4" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.470938 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.488675 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7m79\" (UniqueName: \"kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79\") on node \"crc\" DevicePath \"\"" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.488851 4740 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.488916 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.488974 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.491084 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"] Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.590974 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.591022 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9lzx\" (UniqueName: \"kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.591086 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.692358 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.692398 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9lzx\" (UniqueName: \"kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.692445 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.697221 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.698374 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.723453 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9lzx\" (UniqueName: \"kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.840631 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:18:06 crc kubenswrapper[4740]: I0216 13:18:06.368273 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"] Feb 16 13:18:06 crc kubenswrapper[4740]: I0216 13:18:06.390010 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" event={"ID":"fe15334d-14c1-4670-89fe-3b7d4864b782","Type":"ContainerStarted","Data":"fd35047359348bcf6757809aaf75d042ab2e0b5ade3ef1747ba8159e7d69ef57"} Feb 16 13:18:07 crc kubenswrapper[4740]: I0216 13:18:07.400192 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" event={"ID":"fe15334d-14c1-4670-89fe-3b7d4864b782","Type":"ContainerStarted","Data":"1a6e7751ff12660592cb0af45868f0caa9cc493451bc41d110b28a333756e5e7"} Feb 16 13:18:07 crc kubenswrapper[4740]: I0216 13:18:07.421631 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" podStartSLOduration=1.8797320929999999 podStartE2EDuration="2.421609889s" podCreationTimestamp="2026-02-16 13:18:05 +0000 UTC" firstStartedPulling="2026-02-16 13:18:06.351578088 +0000 UTC m=+1513.727926819" lastFinishedPulling="2026-02-16 13:18:06.893455894 +0000 UTC m=+1514.269804615" observedRunningTime="2026-02-16 13:18:07.416105766 +0000 UTC m=+1514.792454487" watchObservedRunningTime="2026-02-16 13:18:07.421609889 +0000 UTC m=+1514.797958610" Feb 16 13:18:15 crc kubenswrapper[4740]: I0216 13:18:15.575164 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 
13:18:15 crc kubenswrapper[4740]: I0216 13:18:15.575750 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:18:15 crc kubenswrapper[4740]: I0216 13:18:15.575803 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:18:15 crc kubenswrapper[4740]: I0216 13:18:15.576419 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:18:15 crc kubenswrapper[4740]: I0216 13:18:15.576484 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" gracePeriod=600 Feb 16 13:18:15 crc kubenswrapper[4740]: E0216 13:18:15.702711 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:18:16 crc kubenswrapper[4740]: I0216 13:18:16.481779 4740 generic.go:334] 
"Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" exitCode=0 Feb 16 13:18:16 crc kubenswrapper[4740]: I0216 13:18:16.481909 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"} Feb 16 13:18:16 crc kubenswrapper[4740]: I0216 13:18:16.482001 4740 scope.go:117] "RemoveContainer" containerID="330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1" Feb 16 13:18:16 crc kubenswrapper[4740]: I0216 13:18:16.482926 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:18:16 crc kubenswrapper[4740]: E0216 13:18:16.483515 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:18:31 crc kubenswrapper[4740]: I0216 13:18:31.281232 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:18:31 crc kubenswrapper[4740]: E0216 13:18:31.282141 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" 
podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:18:44 crc kubenswrapper[4740]: I0216 13:18:44.283354 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:18:44 crc kubenswrapper[4740]: E0216 13:18:44.284557 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:18:57 crc kubenswrapper[4740]: I0216 13:18:57.281967 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:18:57 crc kubenswrapper[4740]: E0216 13:18:57.283020 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.048312 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8cb8-account-create-update-dgv8s"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.060001 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9mvdt"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.070602 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7989-account-create-update-s6gss"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.079884 4740 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-8cb8-account-create-update-dgv8s"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.088240 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-nxmdt"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.096521 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9mvdt"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.103977 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4b9b-account-create-update-njhb7"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.112486 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9v664"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.122658 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4b9b-account-create-update-njhb7"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.131010 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7989-account-create-update-s6gss"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.138354 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-nxmdt"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.147009 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9v664"] Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.296234 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c97501-5a5c-4e03-8e50-cf7422806c32" path="/var/lib/kubelet/pods/14c97501-5a5c-4e03-8e50-cf7422806c32/volumes" Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.296959 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4996abf8-6c4b-42d0-99f2-aeacf2fd5591" path="/var/lib/kubelet/pods/4996abf8-6c4b-42d0-99f2-aeacf2fd5591/volumes" Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.297494 4740 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59544dcd-0bd1-4b5f-abf6-9ab972168af0" path="/var/lib/kubelet/pods/59544dcd-0bd1-4b5f-abf6-9ab972168af0/volumes" Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.298130 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b945754-b567-43e9-a84a-4e0ea95900e7" path="/var/lib/kubelet/pods/5b945754-b567-43e9-a84a-4e0ea95900e7/volumes" Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.299286 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b12e494a-5467-4264-a0e5-2596c61b4a73" path="/var/lib/kubelet/pods/b12e494a-5467-4264-a0e5-2596c61b4a73/volumes" Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.299850 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb88b05d-b7b7-4a08-847c-5e8d5cc98477" path="/var/lib/kubelet/pods/bb88b05d-b7b7-4a08-847c-5e8d5cc98477/volumes" Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.605586 4740 scope.go:117] "RemoveContainer" containerID="a9f2fb916ca14b8c5ef554516013184723e7f73663c25f8d4596934aa4c48ae1" Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.634332 4740 scope.go:117] "RemoveContainer" containerID="494606bd64bc796891b73eef0c184bf2997ed3f769f9febfe8dd04a4677e5715" Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.668657 4740 scope.go:117] "RemoveContainer" containerID="373fd871381d49fd63e5ca3ab666f3487ac9b7f0d28abe89d7c9eb2229c50cd0" Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.711748 4740 scope.go:117] "RemoveContainer" containerID="baf6eedd884c010f372a94d42ad034029305e68987497ddad6d85e711b8ce518" Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.753270 4740 scope.go:117] "RemoveContainer" containerID="90265e42eb4894d283c546642bcf3972b8e18c6c5cd6a445431640804cc73965" Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.791532 4740 scope.go:117] "RemoveContainer" 
containerID="4ae899300c9a6a6072cde926ba47e7d60a16bc7eccd0ef24bf505410cc36580f" Feb 16 13:19:10 crc kubenswrapper[4740]: I0216 13:19:10.282082 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:19:10 crc kubenswrapper[4740]: E0216 13:19:10.282952 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.487524 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-glnbq"] Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.491408 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.496797 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-glnbq"] Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.589228 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.589392 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp49l\" (UniqueName: \"kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.589633 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.691238 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.691326 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sp49l\" (UniqueName: \"kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.691439 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.691881 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.691908 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.710501 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp49l\" (UniqueName: \"kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.855431 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:21 crc kubenswrapper[4740]: I0216 13:19:21.376127 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-glnbq"] Feb 16 13:19:22 crc kubenswrapper[4740]: I0216 13:19:22.088059 4740 generic.go:334] "Generic (PLEG): container finished" podID="4cafc58b-221a-4319-b03c-b2854606f194" containerID="c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22" exitCode=0 Feb 16 13:19:22 crc kubenswrapper[4740]: I0216 13:19:22.088139 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerDied","Data":"c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22"} Feb 16 13:19:22 crc kubenswrapper[4740]: I0216 13:19:22.088453 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerStarted","Data":"bbe5d05db57db865b1503b876991dc3aec22375e5378fa75aebb73d65f05e85d"} Feb 16 13:19:22 crc kubenswrapper[4740]: I0216 13:19:22.281358 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:19:22 crc kubenswrapper[4740]: E0216 13:19:22.281684 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:19:24 crc kubenswrapper[4740]: I0216 13:19:24.110079 4740 generic.go:334] "Generic (PLEG): container finished" podID="4cafc58b-221a-4319-b03c-b2854606f194" 
containerID="b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c" exitCode=0 Feb 16 13:19:24 crc kubenswrapper[4740]: I0216 13:19:24.110358 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerDied","Data":"b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c"} Feb 16 13:19:25 crc kubenswrapper[4740]: I0216 13:19:25.039753 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gqbdm"] Feb 16 13:19:25 crc kubenswrapper[4740]: I0216 13:19:25.051279 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gqbdm"] Feb 16 13:19:25 crc kubenswrapper[4740]: I0216 13:19:25.119455 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerStarted","Data":"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6"} Feb 16 13:19:25 crc kubenswrapper[4740]: I0216 13:19:25.137889 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-glnbq" podStartSLOduration=2.700881147 podStartE2EDuration="5.137870747s" podCreationTimestamp="2026-02-16 13:19:20 +0000 UTC" firstStartedPulling="2026-02-16 13:19:22.090037521 +0000 UTC m=+1589.466386262" lastFinishedPulling="2026-02-16 13:19:24.527027141 +0000 UTC m=+1591.903375862" observedRunningTime="2026-02-16 13:19:25.13509902 +0000 UTC m=+1592.511447741" watchObservedRunningTime="2026-02-16 13:19:25.137870747 +0000 UTC m=+1592.514219458" Feb 16 13:19:25 crc kubenswrapper[4740]: I0216 13:19:25.292044 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15147587-626f-4577-b5af-b8f574f60152" path="/var/lib/kubelet/pods/15147587-626f-4577-b5af-b8f574f60152/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: 
I0216 13:19:29.035865 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2e1c-account-create-update-htmg9"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.048114 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vf54h"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.061822 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-j27bj"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.073318 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-858d-account-create-update-xr2fs"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.081932 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vf54h"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.091611 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2e1c-account-create-update-htmg9"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.099580 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-j27bj"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.107325 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-858d-account-create-update-xr2fs"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.114750 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-plzhg"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.122906 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f6f4-account-create-update-l7nbq"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.130474 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-plzhg"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.137471 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f6f4-account-create-update-l7nbq"] Feb 16 13:19:29 crc 
kubenswrapper[4740]: I0216 13:19:29.291106 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5296850e-63c0-4801-bff8-bc5213555f58" path="/var/lib/kubelet/pods/5296850e-63c0-4801-bff8-bc5213555f58/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.291731 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="634925bb-5381-4298-a256-447ef56a2f2a" path="/var/lib/kubelet/pods/634925bb-5381-4298-a256-447ef56a2f2a/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.292265 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65301f64-cd42-4faf-b454-a43c7c7096a1" path="/var/lib/kubelet/pods/65301f64-cd42-4faf-b454-a43c7c7096a1/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.292759 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="685d1543-1ab9-435f-b2c0-2a54c104e86f" path="/var/lib/kubelet/pods/685d1543-1ab9-435f-b2c0-2a54c104e86f/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.293936 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aafb0ee-2681-48a9-b1e0-2442d0a16541" path="/var/lib/kubelet/pods/9aafb0ee-2681-48a9-b1e0-2442d0a16541/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.294433 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a14f3fd5-4d53-4336-85b1-7d636060bd0a" path="/var/lib/kubelet/pods/a14f3fd5-4d53-4336-85b1-7d636060bd0a/volumes" Feb 16 13:19:30 crc kubenswrapper[4740]: I0216 13:19:30.856288 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:30 crc kubenswrapper[4740]: I0216 13:19:30.857540 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:30 crc kubenswrapper[4740]: I0216 13:19:30.919028 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-glnbq"
Feb 16 13:19:31 crc kubenswrapper[4740]: I0216 13:19:31.218743 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-glnbq"
Feb 16 13:19:31 crc kubenswrapper[4740]: I0216 13:19:31.307936 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-glnbq"]
Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.198324 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-glnbq" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="registry-server" containerID="cri-o://1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6" gracePeriod=2
Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.747097 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-glnbq"
Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.753098 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content\") pod \"4cafc58b-221a-4319-b03c-b2854606f194\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") "
Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.753167 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp49l\" (UniqueName: \"kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l\") pod \"4cafc58b-221a-4319-b03c-b2854606f194\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") "
Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.753265 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities\") pod \"4cafc58b-221a-4319-b03c-b2854606f194\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") "
Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.754315 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities" (OuterVolumeSpecName: "utilities") pod "4cafc58b-221a-4319-b03c-b2854606f194" (UID: "4cafc58b-221a-4319-b03c-b2854606f194"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.760708 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l" (OuterVolumeSpecName: "kube-api-access-sp49l") pod "4cafc58b-221a-4319-b03c-b2854606f194" (UID: "4cafc58b-221a-4319-b03c-b2854606f194"). InnerVolumeSpecName "kube-api-access-sp49l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.855510 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.855544 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp49l\" (UniqueName: \"kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l\") on node \"crc\" DevicePath \"\""
Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.954218 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cafc58b-221a-4319-b03c-b2854606f194" (UID: "4cafc58b-221a-4319-b03c-b2854606f194"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.957271 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.045840 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qvqg7"]
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.063488 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qvqg7"]
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.212005 4740 generic.go:334] "Generic (PLEG): container finished" podID="4cafc58b-221a-4319-b03c-b2854606f194" containerID="1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6" exitCode=0
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.212076 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerDied","Data":"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6"}
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.212110 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerDied","Data":"bbe5d05db57db865b1503b876991dc3aec22375e5378fa75aebb73d65f05e85d"}
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.212115 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-glnbq"
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.212158 4740 scope.go:117] "RemoveContainer" containerID="1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6"
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.231470 4740 scope.go:117] "RemoveContainer" containerID="b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c"
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.247552 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-glnbq"]
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.255741 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-glnbq"]
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.273784 4740 scope.go:117] "RemoveContainer" containerID="c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22"
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.296805 4740 scope.go:117] "RemoveContainer" containerID="1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6"
Feb 16 13:19:34 crc kubenswrapper[4740]: E0216 13:19:34.297248 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6\": container with ID starting with 1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6 not found: ID does not exist" containerID="1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6"
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.297300 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6"} err="failed to get container status \"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6\": rpc error: code = NotFound desc = could not find container \"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6\": container with ID starting with 1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6 not found: ID does not exist"
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.297366 4740 scope.go:117] "RemoveContainer" containerID="b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c"
Feb 16 13:19:34 crc kubenswrapper[4740]: E0216 13:19:34.297798 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c\": container with ID starting with b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c not found: ID does not exist" containerID="b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c"
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.297845 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c"} err="failed to get container status \"b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c\": rpc error: code = NotFound desc = could not find container \"b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c\": container with ID starting with b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c not found: ID does not exist"
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.297866 4740 scope.go:117] "RemoveContainer" containerID="c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22"
Feb 16 13:19:34 crc kubenswrapper[4740]: E0216 13:19:34.298106 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22\": container with ID starting with c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22 not found: ID does not exist" containerID="c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22"
Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.298141 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22"} err="failed to get container status \"c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22\": rpc error: code = NotFound desc = could not find container \"c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22\": container with ID starting with c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22 not found: ID does not exist"
Feb 16 13:19:35 crc kubenswrapper[4740]: I0216 13:19:35.298365 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cea0875-b3a8-4a52-84ff-d9215408294b" path="/var/lib/kubelet/pods/3cea0875-b3a8-4a52-84ff-d9215408294b/volumes"
Feb 16 13:19:35 crc kubenswrapper[4740]: I0216 13:19:35.298971 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cafc58b-221a-4319-b03c-b2854606f194" path="/var/lib/kubelet/pods/4cafc58b-221a-4319-b03c-b2854606f194/volumes"
Feb 16 13:19:37 crc kubenswrapper[4740]: I0216 13:19:37.281752 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"
Feb 16 13:19:37 crc kubenswrapper[4740]: E0216 13:19:37.283188 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.047744 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zzh8l"]
Feb 16 13:19:40 crc kubenswrapper[4740]: E0216 13:19:40.051133 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="extract-content"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.051683 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="extract-content"
Feb 16 13:19:40 crc kubenswrapper[4740]: E0216 13:19:40.051881 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="extract-utilities"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.051959 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="extract-utilities"
Feb 16 13:19:40 crc kubenswrapper[4740]: E0216 13:19:40.052071 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="registry-server"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.052140 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="registry-server"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.052422 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="registry-server"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.054261 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.087020 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzh8l"]
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.186316 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-catalog-content\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.186419 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-utilities\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.186488 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9vxn\" (UniqueName: \"kubernetes.io/projected/805f4cce-9373-4649-8daa-e97ab900433f-kube-api-access-w9vxn\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.288861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-catalog-content\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.288914 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-utilities\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.288983 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9vxn\" (UniqueName: \"kubernetes.io/projected/805f4cce-9373-4649-8daa-e97ab900433f-kube-api-access-w9vxn\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.289543 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-catalog-content\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.289666 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-utilities\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.321101 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9vxn\" (UniqueName: \"kubernetes.io/projected/805f4cce-9373-4649-8daa-e97ab900433f-kube-api-access-w9vxn\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.388244 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.948220 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzh8l"]
Feb 16 13:19:41 crc kubenswrapper[4740]: I0216 13:19:41.296454 4740 generic.go:334] "Generic (PLEG): container finished" podID="805f4cce-9373-4649-8daa-e97ab900433f" containerID="6331156baa2822a146150f209c54b198ff90e84ba91ea60edd9d4639e468a3d2" exitCode=0
Feb 16 13:19:41 crc kubenswrapper[4740]: I0216 13:19:41.296513 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzh8l" event={"ID":"805f4cce-9373-4649-8daa-e97ab900433f","Type":"ContainerDied","Data":"6331156baa2822a146150f209c54b198ff90e84ba91ea60edd9d4639e468a3d2"}
Feb 16 13:19:41 crc kubenswrapper[4740]: I0216 13:19:41.296544 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzh8l" event={"ID":"805f4cce-9373-4649-8daa-e97ab900433f","Type":"ContainerStarted","Data":"15fb8bcf5417039efc2de024645358e14ed957b9f5d08a68c15b0abf1eb6f47a"}
Feb 16 13:19:41 crc kubenswrapper[4740]: I0216 13:19:41.299408 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 13:19:45 crc kubenswrapper[4740]: I0216 13:19:45.354513 4740 generic.go:334] "Generic (PLEG): container finished" podID="805f4cce-9373-4649-8daa-e97ab900433f" containerID="4486e629945062b7cb8b99f9c66aad8c1cc72225676f5f670d4681bc91d01b42" exitCode=0
Feb 16 13:19:45 crc kubenswrapper[4740]: I0216 13:19:45.354801 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzh8l" event={"ID":"805f4cce-9373-4649-8daa-e97ab900433f","Type":"ContainerDied","Data":"4486e629945062b7cb8b99f9c66aad8c1cc72225676f5f670d4681bc91d01b42"}
Feb 16 13:19:46 crc kubenswrapper[4740]: I0216 13:19:46.368471 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzh8l" event={"ID":"805f4cce-9373-4649-8daa-e97ab900433f","Type":"ContainerStarted","Data":"92b4e33fc0c95830871b37ce7824b131631f109024b3d7bdeef190168f6c3939"}
Feb 16 13:19:50 crc kubenswrapper[4740]: I0216 13:19:50.373305 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"
Feb 16 13:19:50 crc kubenswrapper[4740]: E0216 13:19:50.374048 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:19:50 crc kubenswrapper[4740]: I0216 13:19:50.389102 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:50 crc kubenswrapper[4740]: I0216 13:19:50.389450 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:50 crc kubenswrapper[4740]: I0216 13:19:50.461988 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:19:50 crc kubenswrapper[4740]: I0216 13:19:50.488875 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zzh8l" podStartSLOduration=5.791046971 podStartE2EDuration="10.488858889s" podCreationTimestamp="2026-02-16 13:19:40 +0000 UTC" firstStartedPulling="2026-02-16 13:19:41.299187519 +0000 UTC m=+1608.675536240" lastFinishedPulling="2026-02-16 13:19:45.996999397 +0000 UTC m=+1613.373348158" observedRunningTime="2026-02-16 13:19:46.390233433 +0000 UTC m=+1613.766582154" watchObservedRunningTime="2026-02-16 13:19:50.488858889 +0000 UTC m=+1617.865207610"
Feb 16 13:19:54 crc kubenswrapper[4740]: I0216 13:19:54.456704 4740 generic.go:334] "Generic (PLEG): container finished" podID="fe15334d-14c1-4670-89fe-3b7d4864b782" containerID="1a6e7751ff12660592cb0af45868f0caa9cc493451bc41d110b28a333756e5e7" exitCode=0
Feb 16 13:19:54 crc kubenswrapper[4740]: I0216 13:19:54.456847 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" event={"ID":"fe15334d-14c1-4670-89fe-3b7d4864b782","Type":"ContainerDied","Data":"1a6e7751ff12660592cb0af45868f0caa9cc493451bc41d110b28a333756e5e7"}
Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.848313 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.903015 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9lzx\" (UniqueName: \"kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx\") pod \"fe15334d-14c1-4670-89fe-3b7d4864b782\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") "
Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.903076 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam\") pod \"fe15334d-14c1-4670-89fe-3b7d4864b782\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") "
Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.903215 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory\") pod \"fe15334d-14c1-4670-89fe-3b7d4864b782\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") "
Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.910710 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx" (OuterVolumeSpecName: "kube-api-access-j9lzx") pod "fe15334d-14c1-4670-89fe-3b7d4864b782" (UID: "fe15334d-14c1-4670-89fe-3b7d4864b782"). InnerVolumeSpecName "kube-api-access-j9lzx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.931707 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fe15334d-14c1-4670-89fe-3b7d4864b782" (UID: "fe15334d-14c1-4670-89fe-3b7d4864b782"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.933051 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory" (OuterVolumeSpecName: "inventory") pod "fe15334d-14c1-4670-89fe-3b7d4864b782" (UID: "fe15334d-14c1-4670-89fe-3b7d4864b782"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.004712 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9lzx\" (UniqueName: \"kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx\") on node \"crc\" DevicePath \"\""
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.004747 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.004757 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.480140 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" event={"ID":"fe15334d-14c1-4670-89fe-3b7d4864b782","Type":"ContainerDied","Data":"fd35047359348bcf6757809aaf75d042ab2e0b5ade3ef1747ba8159e7d69ef57"}
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.480468 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd35047359348bcf6757809aaf75d042ab2e0b5ade3ef1747ba8159e7d69ef57"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.480200 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.566534 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"]
Feb 16 13:19:56 crc kubenswrapper[4740]: E0216 13:19:56.567205 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe15334d-14c1-4670-89fe-3b7d4864b782" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.567346 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe15334d-14c1-4670-89fe-3b7d4864b782" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.567709 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe15334d-14c1-4670-89fe-3b7d4864b782" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.568613 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.571799 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.572059 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.572790 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.573143 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.578847 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"]
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.616098 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.616163 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvhh\" (UniqueName: \"kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.616205 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.717654 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.718003 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvhh\" (UniqueName: \"kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.718212 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.724981 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.735972 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.739487 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvhh\" (UniqueName: \"kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.889595 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:19:57 crc kubenswrapper[4740]: I0216 13:19:57.388999 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"]
Feb 16 13:19:57 crc kubenswrapper[4740]: I0216 13:19:57.488330 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" event={"ID":"3691fefa-c161-4670-bae7-ddde074e2892","Type":"ContainerStarted","Data":"cf9e74adf45991a36e82ec73125fa24d4e7afe484c44e3eea437484e318caeb6"}
Feb 16 13:19:58 crc kubenswrapper[4740]: I0216 13:19:58.501960 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" event={"ID":"3691fefa-c161-4670-bae7-ddde074e2892","Type":"ContainerStarted","Data":"d9bfc50642f18cd3bad0f6a96456efca0c8670bd4cddb59f96e902ba917a08e0"}
Feb 16 13:19:58 crc kubenswrapper[4740]: I0216 13:19:58.524534 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" podStartSLOduration=2.030285704 podStartE2EDuration="2.52451749s" podCreationTimestamp="2026-02-16 13:19:56 +0000 UTC" firstStartedPulling="2026-02-16 13:19:57.393255004 +0000 UTC m=+1624.769603745" lastFinishedPulling="2026-02-16 13:19:57.88748679 +0000 UTC m=+1625.263835531" observedRunningTime="2026-02-16 13:19:58.515724554 +0000 UTC m=+1625.892073275" watchObservedRunningTime="2026-02-16 13:19:58.52451749 +0000 UTC m=+1625.900866211"
Feb 16 13:20:00 crc kubenswrapper[4740]: I0216 13:20:00.440223 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zzh8l"
Feb 16 13:20:00 crc kubenswrapper[4740]: I0216 13:20:00.514256 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzh8l"]
Feb 16 13:20:00 crc kubenswrapper[4740]: I0216 13:20:00.555752 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-czkjl"]
Feb 16 13:20:00 crc kubenswrapper[4740]: I0216 13:20:00.556023 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-czkjl" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="registry-server" containerID="cri-o://b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b" gracePeriod=2
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.021414 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-czkjl"
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.066625 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hclws"]
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.082009 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7lg27"]
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.090418 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hclws"]
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.104805 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content\") pod \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") "
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.105159 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities\") pod \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") "
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.105372 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbgkp\" (UniqueName: \"kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp\") pod \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") "
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.107833 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7lg27"]
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.127479 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities" (OuterVolumeSpecName: "utilities") pod "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" (UID: "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.127600 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp" (OuterVolumeSpecName: "kube-api-access-xbgkp") pod "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" (UID: "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf"). InnerVolumeSpecName "kube-api-access-xbgkp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.161042 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" (UID: "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.207363 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbgkp\" (UniqueName: \"kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp\") on node \"crc\" DevicePath \"\""
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.207392 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.207404 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.291979 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c41d146-de9f-4d90-bb9e-6c12fc832650" path="/var/lib/kubelet/pods/2c41d146-de9f-4d90-bb9e-6c12fc832650/volumes"
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.292638 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" path="/var/lib/kubelet/pods/f092c8c4-9a32-4093-9a5c-bc5fd05d600e/volumes"
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.542619 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerID="b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b" exitCode=0
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.542678 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerDied","Data":"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b"}
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216
13:20:01.542704 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerDied","Data":"98fe05c00f99e38008108c07c4337266311b937b88c3c95f0dd66f754946345d"} Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.542720 4740 scope.go:117] "RemoveContainer" containerID="b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.542862 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-czkjl" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.565207 4740 scope.go:117] "RemoveContainer" containerID="591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.568406 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-czkjl"] Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.576364 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-czkjl"] Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.587738 4740 scope.go:117] "RemoveContainer" containerID="a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.638826 4740 scope.go:117] "RemoveContainer" containerID="b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b" Feb 16 13:20:01 crc kubenswrapper[4740]: E0216 13:20:01.639174 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b\": container with ID starting with b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b not found: ID does not exist" containerID="b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b" 
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.639244 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b"} err="failed to get container status \"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b\": rpc error: code = NotFound desc = could not find container \"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b\": container with ID starting with b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b not found: ID does not exist" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.639269 4740 scope.go:117] "RemoveContainer" containerID="591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767" Feb 16 13:20:01 crc kubenswrapper[4740]: E0216 13:20:01.639511 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767\": container with ID starting with 591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767 not found: ID does not exist" containerID="591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.639541 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767"} err="failed to get container status \"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767\": rpc error: code = NotFound desc = could not find container \"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767\": container with ID starting with 591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767 not found: ID does not exist" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.639562 4740 scope.go:117] "RemoveContainer" 
containerID="a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6" Feb 16 13:20:01 crc kubenswrapper[4740]: E0216 13:20:01.639736 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6\": container with ID starting with a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6 not found: ID does not exist" containerID="a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.639766 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6"} err="failed to get container status \"a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6\": rpc error: code = NotFound desc = could not find container \"a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6\": container with ID starting with a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6 not found: ID does not exist" Feb 16 13:20:02 crc kubenswrapper[4740]: I0216 13:20:02.282068 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:20:02 crc kubenswrapper[4740]: E0216 13:20:02.283041 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:20:03 crc kubenswrapper[4740]: I0216 13:20:03.292553 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" 
path="/var/lib/kubelet/pods/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf/volumes" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.479802 4740 scope.go:117] "RemoveContainer" containerID="f1ee4fdc8a66d1ba0f722901509cbb10e2513b2d2385aba5e0bb1ae87766bf21" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.553844 4740 scope.go:117] "RemoveContainer" containerID="2c739ebc1f1677ff442d40ca962571c9b9950c0f73007406504adc0d79d91e7b" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.619650 4740 scope.go:117] "RemoveContainer" containerID="c259d38cb4fa3c5851c1172b3420cec9a5f775ccc35003b355c462a18e258ac9" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.655309 4740 scope.go:117] "RemoveContainer" containerID="a426852617c1fdbfaae0a0c105e30e4a9ba96bd1307ceb03aae494de8c516444" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.691192 4740 scope.go:117] "RemoveContainer" containerID="f2c94c167796a74c5aec9d021793c96199b01a3ee67b46b0fd7d1575574cf5b7" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.729911 4740 scope.go:117] "RemoveContainer" containerID="032b7ecb51e2df34f10ba43675ea39076c3d719ca854e2ab5f2977210eadf6f6" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.780236 4740 scope.go:117] "RemoveContainer" containerID="297fab87042f05bdda341fb78ed7de393ee4aec91b3ea8c4dbadb862e85e4e33" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.800120 4740 scope.go:117] "RemoveContainer" containerID="f5820346a7406bd0978f9265ad799cb90df8fd2faf62bf128649990ff88a581c" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.823888 4740 scope.go:117] "RemoveContainer" containerID="cd58c8b5fc614deaab5c81fb1b971a1824dde743ab1799ae9f95e3e1c7789b94" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.842436 4740 scope.go:117] "RemoveContainer" containerID="afb5050141aa0fbb8480e6bcc95d53720db77edafe16a89f557711467d506eaf" Feb 16 13:20:10 crc kubenswrapper[4740]: I0216 13:20:10.064485 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-bootstrap-lxgpl"] Feb 16 13:20:10 crc kubenswrapper[4740]: I0216 13:20:10.083684 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lxgpl"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.044464 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-d9rnm"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.055524 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dlcqm"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.071107 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-d9rnm"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.081883 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-dlcqm"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.092400 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:11 crc kubenswrapper[4740]: E0216 13:20:11.092787 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="registry-server" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.092804 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="registry-server" Feb 16 13:20:11 crc kubenswrapper[4740]: E0216 13:20:11.092835 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="extract-utilities" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.092841 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="extract-utilities" Feb 16 13:20:11 crc kubenswrapper[4740]: E0216 13:20:11.092851 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" 
containerName="extract-content" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.092876 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="extract-content" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.093076 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="registry-server" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.094470 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.100422 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.190038 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pl9j\" (UniqueName: \"kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.190292 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.190450 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " 
pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.291514 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.291626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.291676 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pl9j\" (UniqueName: \"kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.292169 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.292287 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" 
Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.292790 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa1d954-018c-45a1-93e6-149318cdda8c" path="/var/lib/kubelet/pods/2fa1d954-018c-45a1-93e6-149318cdda8c/volumes" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.293805 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b63f4468-5c78-4dfd-a40a-302877eba3dc" path="/var/lib/kubelet/pods/b63f4468-5c78-4dfd-a40a-302877eba3dc/volumes" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.294541 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1263236-13e5-4a79-b19a-96f535ae0783" path="/var/lib/kubelet/pods/c1263236-13e5-4a79-b19a-96f535ae0783/volumes" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.311009 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pl9j\" (UniqueName: \"kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.413770 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.847895 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:12 crc kubenswrapper[4740]: I0216 13:20:12.658108 4740 generic.go:334] "Generic (PLEG): container finished" podID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerID="9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d" exitCode=0 Feb 16 13:20:12 crc kubenswrapper[4740]: I0216 13:20:12.658172 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerDied","Data":"9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d"} Feb 16 13:20:12 crc kubenswrapper[4740]: I0216 13:20:12.658207 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerStarted","Data":"7bb32512b48a2ef2c26686beead27117e013fd16c2ee07f289aa711fb236a4ed"} Feb 16 13:20:14 crc kubenswrapper[4740]: I0216 13:20:14.282120 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:20:14 crc kubenswrapper[4740]: E0216 13:20:14.283262 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:20:14 crc kubenswrapper[4740]: I0216 13:20:14.680849 4740 generic.go:334] "Generic (PLEG): container finished" podID="f03eabf3-cb8f-4391-bafc-374ea00b3058" 
containerID="82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a" exitCode=0 Feb 16 13:20:14 crc kubenswrapper[4740]: I0216 13:20:14.680896 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerDied","Data":"82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a"} Feb 16 13:20:15 crc kubenswrapper[4740]: I0216 13:20:15.689764 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerStarted","Data":"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff"} Feb 16 13:20:15 crc kubenswrapper[4740]: I0216 13:20:15.708089 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f4gfd" podStartSLOduration=2.020969719 podStartE2EDuration="4.70807212s" podCreationTimestamp="2026-02-16 13:20:11 +0000 UTC" firstStartedPulling="2026-02-16 13:20:12.660274435 +0000 UTC m=+1640.036623156" lastFinishedPulling="2026-02-16 13:20:15.347376846 +0000 UTC m=+1642.723725557" observedRunningTime="2026-02-16 13:20:15.707494012 +0000 UTC m=+1643.083842753" watchObservedRunningTime="2026-02-16 13:20:15.70807212 +0000 UTC m=+1643.084420841" Feb 16 13:20:21 crc kubenswrapper[4740]: I0216 13:20:21.414093 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:21 crc kubenswrapper[4740]: I0216 13:20:21.414732 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:21 crc kubenswrapper[4740]: I0216 13:20:21.486685 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:21 crc kubenswrapper[4740]: I0216 13:20:21.792462 4740 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:21 crc kubenswrapper[4740]: I0216 13:20:21.849709 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:23 crc kubenswrapper[4740]: I0216 13:20:23.794694 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f4gfd" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="registry-server" containerID="cri-o://cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff" gracePeriod=2 Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.240268 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.365788 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content\") pod \"f03eabf3-cb8f-4391-bafc-374ea00b3058\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.366190 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities\") pod \"f03eabf3-cb8f-4391-bafc-374ea00b3058\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.366304 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pl9j\" (UniqueName: \"kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j\") pod \"f03eabf3-cb8f-4391-bafc-374ea00b3058\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.367311 4740 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities" (OuterVolumeSpecName: "utilities") pod "f03eabf3-cb8f-4391-bafc-374ea00b3058" (UID: "f03eabf3-cb8f-4391-bafc-374ea00b3058"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.372278 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j" (OuterVolumeSpecName: "kube-api-access-2pl9j") pod "f03eabf3-cb8f-4391-bafc-374ea00b3058" (UID: "f03eabf3-cb8f-4391-bafc-374ea00b3058"). InnerVolumeSpecName "kube-api-access-2pl9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.393122 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f03eabf3-cb8f-4391-bafc-374ea00b3058" (UID: "f03eabf3-cb8f-4391-bafc-374ea00b3058"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.468849 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.469644 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.469749 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pl9j\" (UniqueName: \"kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j\") on node \"crc\" DevicePath \"\"" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.813085 4740 generic.go:334] "Generic (PLEG): container finished" podID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerID="cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff" exitCode=0 Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.813135 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerDied","Data":"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff"} Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.813146 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.813172 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerDied","Data":"7bb32512b48a2ef2c26686beead27117e013fd16c2ee07f289aa711fb236a4ed"} Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.813195 4740 scope.go:117] "RemoveContainer" containerID="cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.837290 4740 scope.go:117] "RemoveContainer" containerID="82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.864268 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.871842 4740 scope.go:117] "RemoveContainer" containerID="9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.881081 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.915088 4740 scope.go:117] "RemoveContainer" containerID="cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff" Feb 16 13:20:24 crc kubenswrapper[4740]: E0216 13:20:24.915939 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff\": container with ID starting with cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff not found: ID does not exist" containerID="cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.915984 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff"} err="failed to get container status \"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff\": rpc error: code = NotFound desc = could not find container \"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff\": container with ID starting with cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff not found: ID does not exist" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.916013 4740 scope.go:117] "RemoveContainer" containerID="82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a" Feb 16 13:20:24 crc kubenswrapper[4740]: E0216 13:20:24.916789 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a\": container with ID starting with 82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a not found: ID does not exist" containerID="82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.916842 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a"} err="failed to get container status \"82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a\": rpc error: code = NotFound desc = could not find container \"82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a\": container with ID starting with 82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a not found: ID does not exist" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.916869 4740 scope.go:117] "RemoveContainer" containerID="9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d" Feb 16 13:20:24 crc kubenswrapper[4740]: E0216 
13:20:24.917167 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d\": container with ID starting with 9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d not found: ID does not exist" containerID="9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.917204 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d"} err="failed to get container status \"9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d\": rpc error: code = NotFound desc = could not find container \"9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d\": container with ID starting with 9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d not found: ID does not exist" Feb 16 13:20:25 crc kubenswrapper[4740]: I0216 13:20:25.293061 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" path="/var/lib/kubelet/pods/f03eabf3-cb8f-4391-bafc-374ea00b3058/volumes" Feb 16 13:20:26 crc kubenswrapper[4740]: I0216 13:20:26.281838 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:20:26 crc kubenswrapper[4740]: E0216 13:20:26.282716 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:20:32 crc kubenswrapper[4740]: I0216 13:20:32.051045 
4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2hxgr"] Feb 16 13:20:32 crc kubenswrapper[4740]: I0216 13:20:32.061081 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2hxgr"] Feb 16 13:20:33 crc kubenswrapper[4740]: I0216 13:20:33.291763 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" path="/var/lib/kubelet/pods/6e6806e6-e7ab-40bb-a703-0f4bfe131539/volumes" Feb 16 13:20:40 crc kubenswrapper[4740]: I0216 13:20:40.282644 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:20:40 crc kubenswrapper[4740]: E0216 13:20:40.283559 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:20:55 crc kubenswrapper[4740]: I0216 13:20:55.281487 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:20:55 crc kubenswrapper[4740]: E0216 13:20:55.282324 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.042729 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-db-create-r46m7"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.055908 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jfqz9"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.068971 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-ctmrz"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.075657 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7f98-account-create-update-77pz4"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.083720 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-877a-account-create-update-w87w5"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.090891 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9f9b-account-create-update-rc9td"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.097070 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-r46m7"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.103820 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-ctmrz"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.110393 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jfqz9"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.118739 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-877a-account-create-update-w87w5"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.125776 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9f9b-account-create-update-rc9td"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.132354 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7f98-account-create-update-77pz4"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.306670 4740 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf48619-6b39-4215-950a-f8da809dcc11" path="/var/lib/kubelet/pods/0bf48619-6b39-4215-950a-f8da809dcc11/volumes" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.307728 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc528c1-14c9-4bb4-a6f8-621fc066e98a" path="/var/lib/kubelet/pods/2dc528c1-14c9-4bb4-a6f8-621fc066e98a/volumes" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.308438 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93273db-db1d-4c4b-85ad-2d87065c42f4" path="/var/lib/kubelet/pods/b93273db-db1d-4c4b-85ad-2d87065c42f4/volumes" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.309022 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c28029f1-eca0-4cd5-95b3-774c21d6d0ed" path="/var/lib/kubelet/pods/c28029f1-eca0-4cd5-95b3-774c21d6d0ed/volumes" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.310042 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce83ec9b-39d5-4bf9-b343-d3f06f886841" path="/var/lib/kubelet/pods/ce83ec9b-39d5-4bf9-b343-d3f06f886841/volumes" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.310562 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2ec561b-87d9-418d-9376-c48bb31d46f9" path="/var/lib/kubelet/pods/e2ec561b-87d9-418d-9376-c48bb31d46f9/volumes" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.040435 4740 scope.go:117] "RemoveContainer" containerID="4aa507b0c5065c88dcc09741d4612ac5be715de1d8ac33a2444842a74593667f" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.070363 4740 scope.go:117] "RemoveContainer" containerID="d50713edffd58148f7599a08a7e47edd0028addbe2f11ff9b3ec1d7b2dedaaf8" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.121383 4740 scope.go:117] "RemoveContainer" containerID="1f6ef107bcaf336d76ceed01fca141680bec52be47d97dcef3c48566b0276aa5" Feb 16 13:21:05 crc 
kubenswrapper[4740]: I0216 13:21:05.170469 4740 scope.go:117] "RemoveContainer" containerID="a8954ab70eb71a11cadf0847488023deb3343352dcccef9132db389ddd167a80" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.222570 4740 scope.go:117] "RemoveContainer" containerID="8842820f2cf4ddd2c9503055eef8bb5b04d701bcbb5f9d0e546ae5434e57491e" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.247517 4740 scope.go:117] "RemoveContainer" containerID="04fb5b738af72ba9d62044da274c169ea32070a2cc600c09016c81106717ecdd" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.278700 4740 scope.go:117] "RemoveContainer" containerID="451d00cb28f2393a2b488c082f43cfde6cbce5c2b86f4a3ccf6583823523e02b" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.308136 4740 scope.go:117] "RemoveContainer" containerID="20579605a8e47ed4449e3d674d1bbbbcd44cb3f5f3aba1e332068a4ec56b723d" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.326894 4740 scope.go:117] "RemoveContainer" containerID="344296975e26624ad4cacf476e74a30fa10626ccf25a97f67365e99050dc2e41" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.344849 4740 scope.go:117] "RemoveContainer" containerID="04f7de9c276248f11e1d14a403f81582e43458c7dfd1d3b8fc3dc8186de0b569" Feb 16 13:21:08 crc kubenswrapper[4740]: I0216 13:21:08.471717 4740 generic.go:334] "Generic (PLEG): container finished" podID="3691fefa-c161-4670-bae7-ddde074e2892" containerID="d9bfc50642f18cd3bad0f6a96456efca0c8670bd4cddb59f96e902ba917a08e0" exitCode=0 Feb 16 13:21:08 crc kubenswrapper[4740]: I0216 13:21:08.471899 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" event={"ID":"3691fefa-c161-4670-bae7-ddde074e2892","Type":"ContainerDied","Data":"d9bfc50642f18cd3bad0f6a96456efca0c8670bd4cddb59f96e902ba917a08e0"} Feb 16 13:21:09 crc kubenswrapper[4740]: I0216 13:21:09.871046 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.018974 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam\") pod \"3691fefa-c161-4670-bae7-ddde074e2892\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.019105 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory\") pod \"3691fefa-c161-4670-bae7-ddde074e2892\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.019142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvhh\" (UniqueName: \"kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh\") pod \"3691fefa-c161-4670-bae7-ddde074e2892\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.023955 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh" (OuterVolumeSpecName: "kube-api-access-9nvhh") pod "3691fefa-c161-4670-bae7-ddde074e2892" (UID: "3691fefa-c161-4670-bae7-ddde074e2892"). InnerVolumeSpecName "kube-api-access-9nvhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.044510 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3691fefa-c161-4670-bae7-ddde074e2892" (UID: "3691fefa-c161-4670-bae7-ddde074e2892"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.044999 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory" (OuterVolumeSpecName: "inventory") pod "3691fefa-c161-4670-bae7-ddde074e2892" (UID: "3691fefa-c161-4670-bae7-ddde074e2892"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.121048 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.121084 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nvhh\" (UniqueName: \"kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh\") on node \"crc\" DevicePath \"\"" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.121095 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.281851 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:21:10 crc kubenswrapper[4740]: 
E0216 13:21:10.282282 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.523486 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" event={"ID":"3691fefa-c161-4670-bae7-ddde074e2892","Type":"ContainerDied","Data":"cf9e74adf45991a36e82ec73125fa24d4e7afe484c44e3eea437484e318caeb6"} Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.523529 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf9e74adf45991a36e82ec73125fa24d4e7afe484c44e3eea437484e318caeb6" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.523547 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.596766 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"] Feb 16 13:21:10 crc kubenswrapper[4740]: E0216 13:21:10.597234 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3691fefa-c161-4670-bae7-ddde074e2892" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.597259 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3691fefa-c161-4670-bae7-ddde074e2892" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:21:10 crc kubenswrapper[4740]: E0216 13:21:10.597292 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="registry-server" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.597301 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="registry-server" Feb 16 13:21:10 crc kubenswrapper[4740]: E0216 13:21:10.597313 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="extract-content" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.597320 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="extract-content" Feb 16 13:21:10 crc kubenswrapper[4740]: E0216 13:21:10.597340 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="extract-utilities" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.597348 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="extract-utilities" Feb 16 13:21:10 crc 
kubenswrapper[4740]: I0216 13:21:10.597607 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3691fefa-c161-4670-bae7-ddde074e2892" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.597653 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="registry-server" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.598497 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.602358 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.602444 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.602482 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.602489 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.608641 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"] Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.630058 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.630138 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.630167 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsngz\" (UniqueName: \"kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.733186 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.733344 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.733413 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsngz\" (UniqueName: \"kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.737567 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.740652 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.756037 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsngz\" (UniqueName: \"kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.912592 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:11 crc kubenswrapper[4740]: I0216 13:21:11.452771 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"] Feb 16 13:21:11 crc kubenswrapper[4740]: I0216 13:21:11.535186 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" event={"ID":"5add9653-c644-42d7-bd4d-10ecb8f84a90","Type":"ContainerStarted","Data":"0d02f84743881743dccdc8a675b751fd5ccde625f142ea8fcdac0e776d29f5fb"} Feb 16 13:21:12 crc kubenswrapper[4740]: I0216 13:21:12.543897 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" event={"ID":"5add9653-c644-42d7-bd4d-10ecb8f84a90","Type":"ContainerStarted","Data":"bef5b8545f936634ce120d772c950924a651bfe93cb7ce6009b4184b4473fef5"} Feb 16 13:21:12 crc kubenswrapper[4740]: I0216 13:21:12.566195 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" podStartSLOduration=1.8272391639999999 podStartE2EDuration="2.566173593s" podCreationTimestamp="2026-02-16 13:21:10 +0000 UTC" firstStartedPulling="2026-02-16 13:21:11.459285273 +0000 UTC m=+1698.835633994" lastFinishedPulling="2026-02-16 13:21:12.198219702 +0000 UTC m=+1699.574568423" observedRunningTime="2026-02-16 13:21:12.558873184 +0000 UTC m=+1699.935221925" watchObservedRunningTime="2026-02-16 13:21:12.566173593 +0000 UTC m=+1699.942522314" Feb 16 13:21:17 crc kubenswrapper[4740]: I0216 13:21:17.616694 4740 generic.go:334] "Generic (PLEG): container finished" podID="5add9653-c644-42d7-bd4d-10ecb8f84a90" containerID="bef5b8545f936634ce120d772c950924a651bfe93cb7ce6009b4184b4473fef5" exitCode=0 Feb 16 13:21:17 crc kubenswrapper[4740]: I0216 13:21:17.617413 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" event={"ID":"5add9653-c644-42d7-bd4d-10ecb8f84a90","Type":"ContainerDied","Data":"bef5b8545f936634ce120d772c950924a651bfe93cb7ce6009b4184b4473fef5"} Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.016274 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.106053 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam\") pod \"5add9653-c644-42d7-bd4d-10ecb8f84a90\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.106448 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsngz\" (UniqueName: \"kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz\") pod \"5add9653-c644-42d7-bd4d-10ecb8f84a90\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.106542 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory\") pod \"5add9653-c644-42d7-bd4d-10ecb8f84a90\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.113444 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz" (OuterVolumeSpecName: "kube-api-access-hsngz") pod "5add9653-c644-42d7-bd4d-10ecb8f84a90" (UID: "5add9653-c644-42d7-bd4d-10ecb8f84a90"). InnerVolumeSpecName "kube-api-access-hsngz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.139041 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory" (OuterVolumeSpecName: "inventory") pod "5add9653-c644-42d7-bd4d-10ecb8f84a90" (UID: "5add9653-c644-42d7-bd4d-10ecb8f84a90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.153792 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5add9653-c644-42d7-bd4d-10ecb8f84a90" (UID: "5add9653-c644-42d7-bd4d-10ecb8f84a90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.208964 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsngz\" (UniqueName: \"kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz\") on node \"crc\" DevicePath \"\"" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.209000 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.209014 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.635160 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" 
event={"ID":"5add9653-c644-42d7-bd4d-10ecb8f84a90","Type":"ContainerDied","Data":"0d02f84743881743dccdc8a675b751fd5ccde625f142ea8fcdac0e776d29f5fb"} Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.635221 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d02f84743881743dccdc8a675b751fd5ccde625f142ea8fcdac0e776d29f5fb" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.635227 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.713942 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"] Feb 16 13:21:19 crc kubenswrapper[4740]: E0216 13:21:19.714401 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5add9653-c644-42d7-bd4d-10ecb8f84a90" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.714422 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5add9653-c644-42d7-bd4d-10ecb8f84a90" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.714593 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5add9653-c644-42d7-bd4d-10ecb8f84a90" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.715282 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.717273 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.718239 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.718484 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.719080 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.720340 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vnzj\" (UniqueName: \"kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.720408 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.720675 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.729484 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"] Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.822838 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vnzj\" (UniqueName: \"kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.822899 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.823007 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.826608 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.834708 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.838792 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vnzj\" (UniqueName: \"kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:20 crc kubenswrapper[4740]: I0216 13:21:20.031169 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:20 crc kubenswrapper[4740]: I0216 13:21:20.584607 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"] Feb 16 13:21:20 crc kubenswrapper[4740]: I0216 13:21:20.642325 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" event={"ID":"bf3c8754-68ef-4956-a95b-c6751d81b5bf","Type":"ContainerStarted","Data":"1a26d829fd6d458a54d1ced82465a29d890bcbfbc93a8e2da8f5d87f651d6b99"} Feb 16 13:21:21 crc kubenswrapper[4740]: I0216 13:21:21.651883 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" event={"ID":"bf3c8754-68ef-4956-a95b-c6751d81b5bf","Type":"ContainerStarted","Data":"07da2204c2e49d8d3b365a1c53b0b89995daf3879261ed3725095ccd68314b8e"} Feb 16 13:21:21 crc kubenswrapper[4740]: I0216 13:21:21.666969 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" podStartSLOduration=2.220147077 podStartE2EDuration="2.666950204s" podCreationTimestamp="2026-02-16 13:21:19 +0000 UTC" firstStartedPulling="2026-02-16 13:21:20.599669696 +0000 UTC m=+1707.976018417" lastFinishedPulling="2026-02-16 13:21:21.046472823 +0000 UTC m=+1708.422821544" observedRunningTime="2026-02-16 13:21:21.664757414 +0000 UTC m=+1709.041106135" watchObservedRunningTime="2026-02-16 13:21:21.666950204 +0000 UTC m=+1709.043298925" Feb 16 13:21:22 crc kubenswrapper[4740]: I0216 13:21:22.281980 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:21:22 crc kubenswrapper[4740]: E0216 13:21:22.282769 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:21:29 crc kubenswrapper[4740]: I0216 13:21:29.029537 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfmpm"] Feb 16 13:21:29 crc kubenswrapper[4740]: I0216 13:21:29.036658 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfmpm"] Feb 16 13:21:29 crc kubenswrapper[4740]: I0216 13:21:29.292442 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fce641e-1b76-4b99-a99d-9a0ccbf9680e" path="/var/lib/kubelet/pods/2fce641e-1b76-4b99-a99d-9a0ccbf9680e/volumes" Feb 16 13:21:33 crc kubenswrapper[4740]: I0216 13:21:33.284724 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:21:33 crc kubenswrapper[4740]: E0216 13:21:33.285391 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:21:46 crc kubenswrapper[4740]: I0216 13:21:46.281886 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:21:46 crc kubenswrapper[4740]: E0216 13:21:46.283178 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:21:57 crc kubenswrapper[4740]: I0216 13:21:57.281135 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:21:57 crc kubenswrapper[4740]: E0216 13:21:57.282116 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:21:57 crc kubenswrapper[4740]: I0216 13:21:57.993980 4740 generic.go:334] "Generic (PLEG): container finished" podID="bf3c8754-68ef-4956-a95b-c6751d81b5bf" containerID="07da2204c2e49d8d3b365a1c53b0b89995daf3879261ed3725095ccd68314b8e" exitCode=0 Feb 16 13:21:57 crc kubenswrapper[4740]: I0216 13:21:57.994127 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" event={"ID":"bf3c8754-68ef-4956-a95b-c6751d81b5bf","Type":"ContainerDied","Data":"07da2204c2e49d8d3b365a1c53b0b89995daf3879261ed3725095ccd68314b8e"} Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.416122 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.491932 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vnzj\" (UniqueName: \"kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj\") pod \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.492206 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory\") pod \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.492401 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam\") pod \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.499002 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj" (OuterVolumeSpecName: "kube-api-access-6vnzj") pod "bf3c8754-68ef-4956-a95b-c6751d81b5bf" (UID: "bf3c8754-68ef-4956-a95b-c6751d81b5bf"). InnerVolumeSpecName "kube-api-access-6vnzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.521154 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf3c8754-68ef-4956-a95b-c6751d81b5bf" (UID: "bf3c8754-68ef-4956-a95b-c6751d81b5bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.524912 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory" (OuterVolumeSpecName: "inventory") pod "bf3c8754-68ef-4956-a95b-c6751d81b5bf" (UID: "bf3c8754-68ef-4956-a95b-c6751d81b5bf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.595125 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.595160 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vnzj\" (UniqueName: \"kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj\") on node \"crc\" DevicePath \"\"" Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.595173 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.010872 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" 
event={"ID":"bf3c8754-68ef-4956-a95b-c6751d81b5bf","Type":"ContainerDied","Data":"1a26d829fd6d458a54d1ced82465a29d890bcbfbc93a8e2da8f5d87f651d6b99"} Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.010925 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a26d829fd6d458a54d1ced82465a29d890bcbfbc93a8e2da8f5d87f651d6b99" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.010924 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.105328 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"] Feb 16 13:22:00 crc kubenswrapper[4740]: E0216 13:22:00.106017 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3c8754-68ef-4956-a95b-c6751d81b5bf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.106040 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3c8754-68ef-4956-a95b-c6751d81b5bf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.106220 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf3c8754-68ef-4956-a95b-c6751d81b5bf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.106890 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.109267 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.109530 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.109685 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.110291 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.116084 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"] Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.209156 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.209275 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.209329 
4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tjc8\" (UniqueName: \"kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.310601 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.310689 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tjc8\" (UniqueName: \"kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.310772 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.316506 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.319581 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.332198 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tjc8\" (UniqueName: \"kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.427244 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.933040 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"] Feb 16 13:22:01 crc kubenswrapper[4740]: I0216 13:22:01.020533 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" event={"ID":"928b9f1f-3a42-47e3-b895-756f66452ebf","Type":"ContainerStarted","Data":"1795fdd11494c7bc2c48a30d033cb8a34856c810f3231d1c2aca4738cf874f56"} Feb 16 13:22:02 crc kubenswrapper[4740]: I0216 13:22:02.034013 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" event={"ID":"928b9f1f-3a42-47e3-b895-756f66452ebf","Type":"ContainerStarted","Data":"499171f3d6accb7f214514aefc2442a0232182e193c44f41ec436d911ae03374"} Feb 16 13:22:02 crc kubenswrapper[4740]: I0216 13:22:02.071394 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" podStartSLOduration=1.656462229 podStartE2EDuration="2.071359125s" podCreationTimestamp="2026-02-16 13:22:00 +0000 UTC" firstStartedPulling="2026-02-16 13:22:00.938716286 +0000 UTC m=+1748.315065017" lastFinishedPulling="2026-02-16 13:22:01.353613182 +0000 UTC m=+1748.729961913" observedRunningTime="2026-02-16 13:22:02.064234712 +0000 UTC m=+1749.440583453" watchObservedRunningTime="2026-02-16 13:22:02.071359125 +0000 UTC m=+1749.447707886" Feb 16 13:22:05 crc kubenswrapper[4740]: I0216 13:22:05.542535 4740 scope.go:117] "RemoveContainer" containerID="7dd2f91b7b77a95d9efed18149d982eaea3f14083dd0271b85d27533a0b37d3b" Feb 16 13:22:10 crc kubenswrapper[4740]: I0216 13:22:10.282313 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:22:10 crc 
kubenswrapper[4740]: E0216 13:22:10.283100 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:22:24 crc kubenswrapper[4740]: I0216 13:22:24.280787 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:22:24 crc kubenswrapper[4740]: E0216 13:22:24.281651 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:22:25 crc kubenswrapper[4740]: I0216 13:22:25.043387 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8f7gd"] Feb 16 13:22:25 crc kubenswrapper[4740]: I0216 13:22:25.059558 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8f7gd"] Feb 16 13:22:25 crc kubenswrapper[4740]: I0216 13:22:25.292404 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f4deadb-18ac-4d06-ba22-e391b19d38cd" path="/var/lib/kubelet/pods/9f4deadb-18ac-4d06-ba22-e391b19d38cd/volumes" Feb 16 13:22:26 crc kubenswrapper[4740]: I0216 13:22:26.028047 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8crw8"] Feb 16 13:22:26 crc kubenswrapper[4740]: I0216 13:22:26.039269 4740 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell1-conductor-db-sync-8crw8"] Feb 16 13:22:27 crc kubenswrapper[4740]: I0216 13:22:27.300406 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="975c922d-b91a-4cf6-9739-0d478d19765a" path="/var/lib/kubelet/pods/975c922d-b91a-4cf6-9739-0d478d19765a/volumes" Feb 16 13:22:37 crc kubenswrapper[4740]: I0216 13:22:37.281921 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:22:37 crc kubenswrapper[4740]: E0216 13:22:37.282749 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:22:46 crc kubenswrapper[4740]: I0216 13:22:46.433674 4740 generic.go:334] "Generic (PLEG): container finished" podID="928b9f1f-3a42-47e3-b895-756f66452ebf" containerID="499171f3d6accb7f214514aefc2442a0232182e193c44f41ec436d911ae03374" exitCode=0 Feb 16 13:22:46 crc kubenswrapper[4740]: I0216 13:22:46.433776 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" event={"ID":"928b9f1f-3a42-47e3-b895-756f66452ebf","Type":"ContainerDied","Data":"499171f3d6accb7f214514aefc2442a0232182e193c44f41ec436d911ae03374"} Feb 16 13:22:47 crc kubenswrapper[4740]: I0216 13:22:47.865920 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:47 crc kubenswrapper[4740]: I0216 13:22:47.992361 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory\") pod \"928b9f1f-3a42-47e3-b895-756f66452ebf\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " Feb 16 13:22:47 crc kubenswrapper[4740]: I0216 13:22:47.992410 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tjc8\" (UniqueName: \"kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8\") pod \"928b9f1f-3a42-47e3-b895-756f66452ebf\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " Feb 16 13:22:47 crc kubenswrapper[4740]: I0216 13:22:47.992478 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam\") pod \"928b9f1f-3a42-47e3-b895-756f66452ebf\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.000297 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8" (OuterVolumeSpecName: "kube-api-access-5tjc8") pod "928b9f1f-3a42-47e3-b895-756f66452ebf" (UID: "928b9f1f-3a42-47e3-b895-756f66452ebf"). InnerVolumeSpecName "kube-api-access-5tjc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.028872 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "928b9f1f-3a42-47e3-b895-756f66452ebf" (UID: "928b9f1f-3a42-47e3-b895-756f66452ebf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.030845 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory" (OuterVolumeSpecName: "inventory") pod "928b9f1f-3a42-47e3-b895-756f66452ebf" (UID: "928b9f1f-3a42-47e3-b895-756f66452ebf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.094509 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.094550 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.094560 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tjc8\" (UniqueName: \"kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.461445 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" 
event={"ID":"928b9f1f-3a42-47e3-b895-756f66452ebf","Type":"ContainerDied","Data":"1795fdd11494c7bc2c48a30d033cb8a34856c810f3231d1c2aca4738cf874f56"} Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.461507 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1795fdd11494c7bc2c48a30d033cb8a34856c810f3231d1c2aca4738cf874f56" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.461524 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.632882 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-87s8t"] Feb 16 13:22:48 crc kubenswrapper[4740]: E0216 13:22:48.633431 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928b9f1f-3a42-47e3-b895-756f66452ebf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.633450 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="928b9f1f-3a42-47e3-b895-756f66452ebf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.633656 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="928b9f1f-3a42-47e3-b895-756f66452ebf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.634414 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.639224 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.639455 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.639583 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.639724 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.659991 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-87s8t"] Feb 16 13:22:48 crc kubenswrapper[4740]: E0216 13:22:48.797259 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod928b9f1f_3a42_47e3_b895_756f66452ebf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod928b9f1f_3a42_47e3_b895_756f66452ebf.slice/crio-1795fdd11494c7bc2c48a30d033cb8a34856c810f3231d1c2aca4738cf874f56\": RecentStats: unable to find data in memory cache]" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.812075 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 
16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.812474 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5xt2\" (UniqueName: \"kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.812650 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.916031 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5xt2\" (UniqueName: \"kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.916516 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.916792 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" 
(UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.925437 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.931762 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.938840 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5xt2\" (UniqueName: \"kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.997824 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:49 crc kubenswrapper[4740]: I0216 13:22:49.281590 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:22:49 crc kubenswrapper[4740]: E0216 13:22:49.282018 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:22:50 crc kubenswrapper[4740]: I0216 13:22:50.403118 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-87s8t"] Feb 16 13:22:50 crc kubenswrapper[4740]: I0216 13:22:50.476545 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" event={"ID":"8c5c2438-cfba-41a9-b429-80c9ce563348","Type":"ContainerStarted","Data":"53633dd17b1c89afe1c2983f2a35b9d61e7cd61ad850690674ffa04e3d2b3956"} Feb 16 13:22:51 crc kubenswrapper[4740]: I0216 13:22:51.487607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" event={"ID":"8c5c2438-cfba-41a9-b429-80c9ce563348","Type":"ContainerStarted","Data":"4c42dc94e717de546ca05b1432551cfbc5b059948a6a080690af8ae263dbce4f"} Feb 16 13:22:51 crc kubenswrapper[4740]: I0216 13:22:51.511632 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" podStartSLOduration=3.104791047 podStartE2EDuration="3.51161403s" podCreationTimestamp="2026-02-16 13:22:48 +0000 UTC" firstStartedPulling="2026-02-16 13:22:50.391582036 +0000 UTC m=+1797.767930757" lastFinishedPulling="2026-02-16 
13:22:50.798405019 +0000 UTC m=+1798.174753740" observedRunningTime="2026-02-16 13:22:51.505342593 +0000 UTC m=+1798.881691324" watchObservedRunningTime="2026-02-16 13:22:51.51161403 +0000 UTC m=+1798.887962761" Feb 16 13:22:57 crc kubenswrapper[4740]: I0216 13:22:57.541333 4740 generic.go:334] "Generic (PLEG): container finished" podID="8c5c2438-cfba-41a9-b429-80c9ce563348" containerID="4c42dc94e717de546ca05b1432551cfbc5b059948a6a080690af8ae263dbce4f" exitCode=0 Feb 16 13:22:57 crc kubenswrapper[4740]: I0216 13:22:57.541361 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" event={"ID":"8c5c2438-cfba-41a9-b429-80c9ce563348","Type":"ContainerDied","Data":"4c42dc94e717de546ca05b1432551cfbc5b059948a6a080690af8ae263dbce4f"} Feb 16 13:22:58 crc kubenswrapper[4740]: I0216 13:22:58.996324 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.105213 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5xt2\" (UniqueName: \"kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2\") pod \"8c5c2438-cfba-41a9-b429-80c9ce563348\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.105261 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam\") pod \"8c5c2438-cfba-41a9-b429-80c9ce563348\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.105336 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0\") pod 
\"8c5c2438-cfba-41a9-b429-80c9ce563348\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.110559 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2" (OuterVolumeSpecName: "kube-api-access-j5xt2") pod "8c5c2438-cfba-41a9-b429-80c9ce563348" (UID: "8c5c2438-cfba-41a9-b429-80c9ce563348"). InnerVolumeSpecName "kube-api-access-j5xt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.131414 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c5c2438-cfba-41a9-b429-80c9ce563348" (UID: "8c5c2438-cfba-41a9-b429-80c9ce563348"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.138009 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8c5c2438-cfba-41a9-b429-80c9ce563348" (UID: "8c5c2438-cfba-41a9-b429-80c9ce563348"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.208509 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5xt2\" (UniqueName: \"kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.208569 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.208594 4740 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.558473 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" event={"ID":"8c5c2438-cfba-41a9-b429-80c9ce563348","Type":"ContainerDied","Data":"53633dd17b1c89afe1c2983f2a35b9d61e7cd61ad850690674ffa04e3d2b3956"} Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.558747 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53633dd17b1c89afe1c2983f2a35b9d61e7cd61ad850690674ffa04e3d2b3956" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.558528 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.648094 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds"] Feb 16 13:22:59 crc kubenswrapper[4740]: E0216 13:22:59.648711 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5c2438-cfba-41a9-b429-80c9ce563348" containerName="ssh-known-hosts-edpm-deployment" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.648795 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5c2438-cfba-41a9-b429-80c9ce563348" containerName="ssh-known-hosts-edpm-deployment" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.649097 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5c2438-cfba-41a9-b429-80c9ce563348" containerName="ssh-known-hosts-edpm-deployment" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.649740 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.654057 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.654158 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.654409 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.654672 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.673334 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds"] Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.725353 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.725428 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzxtm\" (UniqueName: \"kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.725466 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.826731 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzxtm\" (UniqueName: \"kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.826837 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.826987 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.831223 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: 
\"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.835650 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.850370 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzxtm\" (UniqueName: \"kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.968024 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:23:00 crc kubenswrapper[4740]: I0216 13:23:00.539208 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds"] Feb 16 13:23:00 crc kubenswrapper[4740]: I0216 13:23:00.569758 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" event={"ID":"981b1e60-57d5-4a6b-8531-3fd31dd46fa5","Type":"ContainerStarted","Data":"2dab8ecafd1d95f9d8e6fdf04fc6bdd21d98e80d91e7dfc1d0f8ac0940f8b8e5"} Feb 16 13:23:01 crc kubenswrapper[4740]: I0216 13:23:01.583798 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" event={"ID":"981b1e60-57d5-4a6b-8531-3fd31dd46fa5","Type":"ContainerStarted","Data":"ea8d927a6e9655274266da02ef4e31b3f1b7918a417e6263a62fb6f4ffefe22b"} Feb 16 13:23:01 crc kubenswrapper[4740]: I0216 13:23:01.601749 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" podStartSLOduration=2.126018965 podStartE2EDuration="2.60172619s" podCreationTimestamp="2026-02-16 13:22:59 +0000 UTC" firstStartedPulling="2026-02-16 13:23:00.553293315 +0000 UTC m=+1807.929642036" lastFinishedPulling="2026-02-16 13:23:01.02900053 +0000 UTC m=+1808.405349261" observedRunningTime="2026-02-16 13:23:01.599735658 +0000 UTC m=+1808.976084379" watchObservedRunningTime="2026-02-16 13:23:01.60172619 +0000 UTC m=+1808.978074911" Feb 16 13:23:03 crc kubenswrapper[4740]: I0216 13:23:03.289806 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:23:03 crc kubenswrapper[4740]: E0216 13:23:03.290433 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:23:05 crc kubenswrapper[4740]: I0216 13:23:05.622374 4740 scope.go:117] "RemoveContainer" containerID="2b6d6cab54020a43fe901cf5d2e1f8b359bd5539376cbb46667ef99bd2c317e5" Feb 16 13:23:05 crc kubenswrapper[4740]: I0216 13:23:05.662215 4740 scope.go:117] "RemoveContainer" containerID="0919266d903860a70c30148766f4b23532edcd71ec7a5ae724fd4a59f06dffc4" Feb 16 13:23:09 crc kubenswrapper[4740]: I0216 13:23:09.645790 4740 generic.go:334] "Generic (PLEG): container finished" podID="981b1e60-57d5-4a6b-8531-3fd31dd46fa5" containerID="ea8d927a6e9655274266da02ef4e31b3f1b7918a417e6263a62fb6f4ffefe22b" exitCode=0 Feb 16 13:23:09 crc kubenswrapper[4740]: I0216 13:23:09.646369 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" event={"ID":"981b1e60-57d5-4a6b-8531-3fd31dd46fa5","Type":"ContainerDied","Data":"ea8d927a6e9655274266da02ef4e31b3f1b7918a417e6263a62fb6f4ffefe22b"} Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.041497 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9964"] Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.050060 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9964"] Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.105080 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.245466 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam\") pod \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.245761 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory\") pod \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.245850 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzxtm\" (UniqueName: \"kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm\") pod \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.250581 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm" (OuterVolumeSpecName: "kube-api-access-qzxtm") pod "981b1e60-57d5-4a6b-8531-3fd31dd46fa5" (UID: "981b1e60-57d5-4a6b-8531-3fd31dd46fa5"). InnerVolumeSpecName "kube-api-access-qzxtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.286424 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory" (OuterVolumeSpecName: "inventory") pod "981b1e60-57d5-4a6b-8531-3fd31dd46fa5" (UID: "981b1e60-57d5-4a6b-8531-3fd31dd46fa5"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.305282 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "981b1e60-57d5-4a6b-8531-3fd31dd46fa5" (UID: "981b1e60-57d5-4a6b-8531-3fd31dd46fa5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.305454 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798bf8e1-4a33-48eb-bbb3-9be8d38027de" path="/var/lib/kubelet/pods/798bf8e1-4a33-48eb-bbb3-9be8d38027de/volumes" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.348495 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.348544 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzxtm\" (UniqueName: \"kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.348556 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.666157 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" event={"ID":"981b1e60-57d5-4a6b-8531-3fd31dd46fa5","Type":"ContainerDied","Data":"2dab8ecafd1d95f9d8e6fdf04fc6bdd21d98e80d91e7dfc1d0f8ac0940f8b8e5"} Feb 16 13:23:11 crc 
kubenswrapper[4740]: I0216 13:23:11.666219 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dab8ecafd1d95f9d8e6fdf04fc6bdd21d98e80d91e7dfc1d0f8ac0940f8b8e5" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.667990 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.775262 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g"] Feb 16 13:23:11 crc kubenswrapper[4740]: E0216 13:23:11.776524 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981b1e60-57d5-4a6b-8531-3fd31dd46fa5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.776568 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="981b1e60-57d5-4a6b-8531-3fd31dd46fa5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.777255 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="981b1e60-57d5-4a6b-8531-3fd31dd46fa5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.778116 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.781567 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.781872 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.781599 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.781740 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.789242 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g"] Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.861409 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.861471 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29d67\" (UniqueName: \"kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.862013 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.963251 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29d67\" (UniqueName: \"kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.963439 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.963501 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.967254 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.967268 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.980713 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29d67\" (UniqueName: \"kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:12 crc kubenswrapper[4740]: I0216 13:23:12.098341 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:12 crc kubenswrapper[4740]: I0216 13:23:12.641480 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g"] Feb 16 13:23:12 crc kubenswrapper[4740]: I0216 13:23:12.681712 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" event={"ID":"9fa622a2-4774-4038-b9ec-ec4bc7f57a46","Type":"ContainerStarted","Data":"95261fc9abbee3973a49eb1f3537faebfcdc2586215c5a3d4d0981cc45f96633"} Feb 16 13:23:13 crc kubenswrapper[4740]: I0216 13:23:13.692151 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" event={"ID":"9fa622a2-4774-4038-b9ec-ec4bc7f57a46","Type":"ContainerStarted","Data":"05e652519318768d325a754b0bb1e53b51234bae739c6c37ba79de488ea96f8f"} Feb 16 13:23:13 crc kubenswrapper[4740]: I0216 13:23:13.716750 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" podStartSLOduration=2.158284335 podStartE2EDuration="2.716724373s" podCreationTimestamp="2026-02-16 13:23:11 +0000 UTC" firstStartedPulling="2026-02-16 13:23:12.64154865 +0000 UTC m=+1820.017897371" lastFinishedPulling="2026-02-16 13:23:13.199988688 +0000 UTC m=+1820.576337409" observedRunningTime="2026-02-16 13:23:13.713401099 +0000 UTC m=+1821.089749840" watchObservedRunningTime="2026-02-16 13:23:13.716724373 +0000 UTC m=+1821.093073094" Feb 16 13:23:18 crc kubenswrapper[4740]: I0216 13:23:18.281631 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:23:18 crc kubenswrapper[4740]: I0216 13:23:18.747585 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" 
event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239"} Feb 16 13:23:24 crc kubenswrapper[4740]: I0216 13:23:24.965118 4740 generic.go:334] "Generic (PLEG): container finished" podID="9fa622a2-4774-4038-b9ec-ec4bc7f57a46" containerID="05e652519318768d325a754b0bb1e53b51234bae739c6c37ba79de488ea96f8f" exitCode=0 Feb 16 13:23:24 crc kubenswrapper[4740]: I0216 13:23:24.965352 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" event={"ID":"9fa622a2-4774-4038-b9ec-ec4bc7f57a46","Type":"ContainerDied","Data":"05e652519318768d325a754b0bb1e53b51234bae739c6c37ba79de488ea96f8f"} Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.410904 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.478617 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29d67\" (UniqueName: \"kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67\") pod \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.478782 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam\") pod \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.478906 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory\") pod \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\" (UID: 
\"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.490070 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67" (OuterVolumeSpecName: "kube-api-access-29d67") pod "9fa622a2-4774-4038-b9ec-ec4bc7f57a46" (UID: "9fa622a2-4774-4038-b9ec-ec4bc7f57a46"). InnerVolumeSpecName "kube-api-access-29d67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.507425 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9fa622a2-4774-4038-b9ec-ec4bc7f57a46" (UID: "9fa622a2-4774-4038-b9ec-ec4bc7f57a46"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.522160 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory" (OuterVolumeSpecName: "inventory") pod "9fa622a2-4774-4038-b9ec-ec4bc7f57a46" (UID: "9fa622a2-4774-4038-b9ec-ec4bc7f57a46"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.581033 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29d67\" (UniqueName: \"kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.581083 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.581095 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.981270 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" event={"ID":"9fa622a2-4774-4038-b9ec-ec4bc7f57a46","Type":"ContainerDied","Data":"95261fc9abbee3973a49eb1f3537faebfcdc2586215c5a3d4d0981cc45f96633"} Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.981311 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95261fc9abbee3973a49eb1f3537faebfcdc2586215c5a3d4d0981cc45f96633" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.981324 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.064222 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh"] Feb 16 13:23:27 crc kubenswrapper[4740]: E0216 13:23:27.064710 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa622a2-4774-4038-b9ec-ec4bc7f57a46" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.064739 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa622a2-4774-4038-b9ec-ec4bc7f57a46" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.065017 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa622a2-4774-4038-b9ec-ec4bc7f57a46" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.065781 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.071698 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.071943 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.072125 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.072305 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.072500 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.072548 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.072601 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.079608 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh"] Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.081621 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191584 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191640 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191715 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191783 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191827 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mjt\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191908 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192048 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192155 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192206 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192259 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192367 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192555 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192590 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.293871 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294195 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9mjt\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294247 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294272 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294300 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294322 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294343 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: 
\"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294374 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294435 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294463 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 
crc kubenswrapper[4740]: I0216 13:23:27.294479 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294522 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294541 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.303679 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.304092 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.304497 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.305577 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.306162 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.306305 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.306588 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.306872 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.308139 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.308739 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: 
\"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.309783 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.315168 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.315207 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.322120 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9mjt\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.418211 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.964469 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh"] Feb 16 13:23:27 crc kubenswrapper[4740]: W0216 13:23:27.974062 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e117ddc_9ff8_414d_859b_0a16b4846029.slice/crio-dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8 WatchSource:0}: Error finding container dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8: Status 404 returned error can't find the container with id dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8 Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.990667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" event={"ID":"3e117ddc-9ff8-414d-859b-0a16b4846029","Type":"ContainerStarted","Data":"dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8"} Feb 16 13:23:29 crc kubenswrapper[4740]: I0216 13:23:29.000225 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" event={"ID":"3e117ddc-9ff8-414d-859b-0a16b4846029","Type":"ContainerStarted","Data":"4851cb174dad6fed04b8c1e5aca0598cfd1f61a411fcaef18bddef0d653c4a50"} Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.877442 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" podStartSLOduration=17.472736127 podStartE2EDuration="17.877423389s" 
podCreationTimestamp="2026-02-16 13:23:27 +0000 UTC" firstStartedPulling="2026-02-16 13:23:27.975885843 +0000 UTC m=+1835.352234564" lastFinishedPulling="2026-02-16 13:23:28.380573105 +0000 UTC m=+1835.756921826" observedRunningTime="2026-02-16 13:23:29.030142671 +0000 UTC m=+1836.406491392" watchObservedRunningTime="2026-02-16 13:23:44.877423389 +0000 UTC m=+1852.253772110" Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.881348 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.885146 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.893629 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.970031 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.970265 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdfl8\" (UniqueName: \"kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.970542 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.071989 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.072109 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.072165 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdfl8\" (UniqueName: \"kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.072467 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.072588 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.092338 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdfl8\" (UniqueName: \"kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.265665 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.731392 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:23:46 crc kubenswrapper[4740]: I0216 13:23:46.130244 4740 generic.go:334] "Generic (PLEG): container finished" podID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerID="b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7" exitCode=0 Feb 16 13:23:46 crc kubenswrapper[4740]: I0216 13:23:46.130293 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerDied","Data":"b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7"} Feb 16 13:23:46 crc kubenswrapper[4740]: I0216 13:23:46.130325 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerStarted","Data":"82656086a4a118e1102fc82f336330b684a7f0526916306ff71cb6795b454ce4"} Feb 16 13:23:48 crc kubenswrapper[4740]: I0216 13:23:48.146121 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerStarted","Data":"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d"} Feb 16 13:23:50 crc kubenswrapper[4740]: I0216 13:23:50.164433 4740 generic.go:334] "Generic (PLEG): container finished" podID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerID="09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d" exitCode=0 Feb 16 13:23:50 crc kubenswrapper[4740]: I0216 13:23:50.164520 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerDied","Data":"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d"} Feb 16 13:23:51 crc kubenswrapper[4740]: I0216 13:23:51.176491 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerStarted","Data":"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2"} Feb 16 13:23:51 crc kubenswrapper[4740]: I0216 13:23:51.198525 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bc555" podStartSLOduration=2.792385574 podStartE2EDuration="7.198505745s" podCreationTimestamp="2026-02-16 13:23:44 +0000 UTC" firstStartedPulling="2026-02-16 13:23:46.131736411 +0000 UTC m=+1853.508085132" lastFinishedPulling="2026-02-16 13:23:50.537856582 +0000 UTC m=+1857.914205303" observedRunningTime="2026-02-16 13:23:51.190183705 +0000 UTC m=+1858.566532426" watchObservedRunningTime="2026-02-16 13:23:51.198505745 +0000 UTC m=+1858.574854466" Feb 16 13:23:55 crc kubenswrapper[4740]: I0216 13:23:55.266688 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:55 crc kubenswrapper[4740]: I0216 13:23:55.267269 4740 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:56 crc kubenswrapper[4740]: I0216 13:23:56.313482 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bc555" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="registry-server" probeResult="failure" output=< Feb 16 13:23:56 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 13:23:56 crc kubenswrapper[4740]: > Feb 16 13:24:04 crc kubenswrapper[4740]: I0216 13:24:04.302078 4740 generic.go:334] "Generic (PLEG): container finished" podID="3e117ddc-9ff8-414d-859b-0a16b4846029" containerID="4851cb174dad6fed04b8c1e5aca0598cfd1f61a411fcaef18bddef0d653c4a50" exitCode=0 Feb 16 13:24:04 crc kubenswrapper[4740]: I0216 13:24:04.302175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" event={"ID":"3e117ddc-9ff8-414d-859b-0a16b4846029","Type":"ContainerDied","Data":"4851cb174dad6fed04b8c1e5aca0598cfd1f61a411fcaef18bddef0d653c4a50"} Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.340141 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.407167 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.584236 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.709764 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.758219 4740 scope.go:117] "RemoveContainer" containerID="88c2737be162f9ce8e9f65cc3abfbf6976ffda5494fd0d0154f6ce2147c27b29" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.775112 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.775236 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.775264 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.775293 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776319 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776397 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776444 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776473 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776503 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776523 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x9mjt\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776546 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776584 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776615 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776648 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.783825 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod 
"3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.784380 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.784693 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.784746 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.785773 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.788527 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.788591 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.788852 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). 
InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.789359 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.789546 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.790985 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.795290 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt" (OuterVolumeSpecName: "kube-api-access-x9mjt") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "kube-api-access-x9mjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.808649 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.816579 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory" (OuterVolumeSpecName: "inventory") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879194 4740 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879226 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879238 4740 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879248 4740 reconciler_common.go:293] "Volume detached for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879256 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879265 4740 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879276 4740 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879284 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879294 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879303 4740 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879311 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9mjt\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879321 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879330 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879337 4740 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.328660 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" event={"ID":"3e117ddc-9ff8-414d-859b-0a16b4846029","Type":"ContainerDied","Data":"dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8"} Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.328750 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.328706 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.460110 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk"] Feb 16 13:24:06 crc kubenswrapper[4740]: E0216 13:24:06.460684 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e117ddc-9ff8-414d-859b-0a16b4846029" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.460703 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e117ddc-9ff8-414d-859b-0a16b4846029" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.461002 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e117ddc-9ff8-414d-859b-0a16b4846029" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.461902 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.469678 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.469699 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.469962 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.470109 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.470133 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.477071 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk"] Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.600609 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swbb4\" (UniqueName: \"kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.600691 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.600751 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.600877 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.600913 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.702635 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.702680 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.702752 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swbb4\" (UniqueName: \"kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.702784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.702841 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.703703 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: 
\"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.708443 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.711460 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.713761 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.742431 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swbb4\" (UniqueName: \"kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.790331 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.338176 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bc555" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="registry-server" containerID="cri-o://d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2" gracePeriod=2 Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.399359 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk"] Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.797426 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.925637 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities\") pod \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.925766 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content\") pod \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.925873 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdfl8\" (UniqueName: \"kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8\") pod \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.926605 4740 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities" (OuterVolumeSpecName: "utilities") pod "f7370a76-dcf5-4db7-b2b8-7a142cbae00d" (UID: "f7370a76-dcf5-4db7-b2b8-7a142cbae00d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.929162 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8" (OuterVolumeSpecName: "kube-api-access-fdfl8") pod "f7370a76-dcf5-4db7-b2b8-7a142cbae00d" (UID: "f7370a76-dcf5-4db7-b2b8-7a142cbae00d"). InnerVolumeSpecName "kube-api-access-fdfl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.028071 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.028249 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdfl8\" (UniqueName: \"kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.082661 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7370a76-dcf5-4db7-b2b8-7a142cbae00d" (UID: "f7370a76-dcf5-4db7-b2b8-7a142cbae00d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.129503 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.348974 4740 generic.go:334] "Generic (PLEG): container finished" podID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerID="d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2" exitCode=0 Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.349062 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerDied","Data":"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2"} Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.349098 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerDied","Data":"82656086a4a118e1102fc82f336330b684a7f0526916306ff71cb6795b454ce4"} Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.349112 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.349132 4740 scope.go:117] "RemoveContainer" containerID="d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.354210 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" event={"ID":"d66e0695-3544-4fd0-9d34-42bea96ea9de","Type":"ContainerStarted","Data":"def6f897d9c7720679dd57aff0afc546f1e90066050f43d185ff1d14432ddf04"} Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.354252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" event={"ID":"d66e0695-3544-4fd0-9d34-42bea96ea9de","Type":"ContainerStarted","Data":"0b1a54984c0f19763fdb25f3b6927c9500467610413a5774582507a2eac16124"} Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.379162 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" podStartSLOduration=1.880330781 podStartE2EDuration="2.379125514s" podCreationTimestamp="2026-02-16 13:24:06 +0000 UTC" firstStartedPulling="2026-02-16 13:24:07.41837076 +0000 UTC m=+1874.794719491" lastFinishedPulling="2026-02-16 13:24:07.917165503 +0000 UTC m=+1875.293514224" observedRunningTime="2026-02-16 13:24:08.368186753 +0000 UTC m=+1875.744535474" watchObservedRunningTime="2026-02-16 13:24:08.379125514 +0000 UTC m=+1875.755474235" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.383100 4740 scope.go:117] "RemoveContainer" containerID="09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.403551 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.411445 4740 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.417640 4740 scope.go:117] "RemoveContainer" containerID="b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.443667 4740 scope.go:117] "RemoveContainer" containerID="d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2" Feb 16 13:24:08 crc kubenswrapper[4740]: E0216 13:24:08.450545 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2\": container with ID starting with d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2 not found: ID does not exist" containerID="d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.450617 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2"} err="failed to get container status \"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2\": rpc error: code = NotFound desc = could not find container \"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2\": container with ID starting with d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2 not found: ID does not exist" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.450655 4740 scope.go:117] "RemoveContainer" containerID="09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d" Feb 16 13:24:08 crc kubenswrapper[4740]: E0216 13:24:08.451172 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d\": container with ID starting with 
09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d not found: ID does not exist" containerID="09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.451239 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d"} err="failed to get container status \"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d\": rpc error: code = NotFound desc = could not find container \"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d\": container with ID starting with 09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d not found: ID does not exist" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.451274 4740 scope.go:117] "RemoveContainer" containerID="b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7" Feb 16 13:24:08 crc kubenswrapper[4740]: E0216 13:24:08.451738 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7\": container with ID starting with b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7 not found: ID does not exist" containerID="b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.451771 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7"} err="failed to get container status \"b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7\": rpc error: code = NotFound desc = could not find container \"b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7\": container with ID starting with b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7 not found: ID does not 
exist" Feb 16 13:24:09 crc kubenswrapper[4740]: I0216 13:24:09.294930 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" path="/var/lib/kubelet/pods/f7370a76-dcf5-4db7-b2b8-7a142cbae00d/volumes" Feb 16 13:25:08 crc kubenswrapper[4740]: I0216 13:25:08.911476 4740 generic.go:334] "Generic (PLEG): container finished" podID="d66e0695-3544-4fd0-9d34-42bea96ea9de" containerID="def6f897d9c7720679dd57aff0afc546f1e90066050f43d185ff1d14432ddf04" exitCode=0 Feb 16 13:25:08 crc kubenswrapper[4740]: I0216 13:25:08.911574 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" event={"ID":"d66e0695-3544-4fd0-9d34-42bea96ea9de","Type":"ContainerDied","Data":"def6f897d9c7720679dd57aff0afc546f1e90066050f43d185ff1d14432ddf04"} Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.296337 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.416428 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory\") pod \"d66e0695-3544-4fd0-9d34-42bea96ea9de\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.416802 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle\") pod \"d66e0695-3544-4fd0-9d34-42bea96ea9de\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.416888 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swbb4\" (UniqueName: 
\"kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4\") pod \"d66e0695-3544-4fd0-9d34-42bea96ea9de\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.416977 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam\") pod \"d66e0695-3544-4fd0-9d34-42bea96ea9de\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.417100 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0\") pod \"d66e0695-3544-4fd0-9d34-42bea96ea9de\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.423431 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d66e0695-3544-4fd0-9d34-42bea96ea9de" (UID: "d66e0695-3544-4fd0-9d34-42bea96ea9de"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.423554 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4" (OuterVolumeSpecName: "kube-api-access-swbb4") pod "d66e0695-3544-4fd0-9d34-42bea96ea9de" (UID: "d66e0695-3544-4fd0-9d34-42bea96ea9de"). InnerVolumeSpecName "kube-api-access-swbb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.446530 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d66e0695-3544-4fd0-9d34-42bea96ea9de" (UID: "d66e0695-3544-4fd0-9d34-42bea96ea9de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.447956 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory" (OuterVolumeSpecName: "inventory") pod "d66e0695-3544-4fd0-9d34-42bea96ea9de" (UID: "d66e0695-3544-4fd0-9d34-42bea96ea9de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.456040 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "d66e0695-3544-4fd0-9d34-42bea96ea9de" (UID: "d66e0695-3544-4fd0-9d34-42bea96ea9de"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.520210 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.520264 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.520278 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swbb4\" (UniqueName: \"kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.520289 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.520318 4740 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.934626 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" event={"ID":"d66e0695-3544-4fd0-9d34-42bea96ea9de","Type":"ContainerDied","Data":"0b1a54984c0f19763fdb25f3b6927c9500467610413a5774582507a2eac16124"} Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.934670 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b1a54984c0f19763fdb25f3b6927c9500467610413a5774582507a2eac16124" Feb 
16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.934701 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032150 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w"] Feb 16 13:25:11 crc kubenswrapper[4740]: E0216 13:25:11.032492 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66e0695-3544-4fd0-9d34-42bea96ea9de" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032508 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66e0695-3544-4fd0-9d34-42bea96ea9de" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 16 13:25:11 crc kubenswrapper[4740]: E0216 13:25:11.032531 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="extract-content" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032537 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="extract-content" Feb 16 13:25:11 crc kubenswrapper[4740]: E0216 13:25:11.032545 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="extract-utilities" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032551 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="extract-utilities" Feb 16 13:25:11 crc kubenswrapper[4740]: E0216 13:25:11.032574 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="registry-server" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032579 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="registry-server" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032735 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66e0695-3544-4fd0-9d34-42bea96ea9de" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032747 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="registry-server" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.033331 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.040406 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.040690 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.040938 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.041066 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.041202 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.042278 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.051115 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w"] Feb 
16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.131073 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.131346 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.131461 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.131524 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc 
kubenswrapper[4740]: I0216 13:25:11.131568 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vfj\" (UniqueName: \"kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.131708 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.233505 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vfj\" (UniqueName: \"kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.233596 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.233671 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.233849 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.234496 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.234547 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.237984 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.238302 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.238722 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.245287 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.245562 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.250851 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vfj\" (UniqueName: \"kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.364935 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.854650 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w"] Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.863801 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.943575 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" event={"ID":"3a7cecfd-1168-4187-a70c-7b2151ff214f","Type":"ContainerStarted","Data":"d62b36a9918816bc2b3ac769046ef753b4bbbfcf769e0afedd743b11e8af00b0"} Feb 16 13:25:12 crc kubenswrapper[4740]: I0216 13:25:12.954570 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" event={"ID":"3a7cecfd-1168-4187-a70c-7b2151ff214f","Type":"ContainerStarted","Data":"d8bb03661c90a47850c899f4cabcc8ce7bab8422fa528702cfdf8cd3447643dc"} Feb 16 13:25:12 crc kubenswrapper[4740]: I0216 13:25:12.985022 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" podStartSLOduration=1.54363062 podStartE2EDuration="1.984996287s" podCreationTimestamp="2026-02-16 13:25:11 +0000 UTC" firstStartedPulling="2026-02-16 13:25:11.863424126 +0000 UTC m=+1939.239772857" lastFinishedPulling="2026-02-16 13:25:12.304789803 +0000 UTC m=+1939.681138524" observedRunningTime="2026-02-16 13:25:12.980057393 +0000 UTC m=+1940.356406144" watchObservedRunningTime="2026-02-16 13:25:12.984996287 +0000 UTC m=+1940.361345048" Feb 16 13:25:45 crc kubenswrapper[4740]: I0216 13:25:45.575947 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:25:45 crc kubenswrapper[4740]: I0216 13:25:45.578790 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:25:58 crc kubenswrapper[4740]: I0216 13:25:58.401579 4740 generic.go:334] "Generic (PLEG): container finished" podID="3a7cecfd-1168-4187-a70c-7b2151ff214f" containerID="d8bb03661c90a47850c899f4cabcc8ce7bab8422fa528702cfdf8cd3447643dc" exitCode=0 Feb 16 13:25:58 crc kubenswrapper[4740]: I0216 13:25:58.401661 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" event={"ID":"3a7cecfd-1168-4187-a70c-7b2151ff214f","Type":"ContainerDied","Data":"d8bb03661c90a47850c899f4cabcc8ce7bab8422fa528702cfdf8cd3447643dc"} Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.815994 4740 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891219 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891519 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891596 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891699 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891778 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7vfj\" (UniqueName: \"kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" 
(UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891909 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.897208 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj" (OuterVolumeSpecName: "kube-api-access-q7vfj") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "kube-api-access-q7vfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.897420 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.917452 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory" (OuterVolumeSpecName: "inventory") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.919353 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.920447 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.923372 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994109 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7vfj\" (UniqueName: \"kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994136 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994146 4740 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994156 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994164 4740 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994176 4740 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.422375 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" event={"ID":"3a7cecfd-1168-4187-a70c-7b2151ff214f","Type":"ContainerDied","Data":"d62b36a9918816bc2b3ac769046ef753b4bbbfcf769e0afedd743b11e8af00b0"} Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.422449 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d62b36a9918816bc2b3ac769046ef753b4bbbfcf769e0afedd743b11e8af00b0" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.422469 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.528780 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65"] Feb 16 13:26:00 crc kubenswrapper[4740]: E0216 13:26:00.530388 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7cecfd-1168-4187-a70c-7b2151ff214f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.530466 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7cecfd-1168-4187-a70c-7b2151ff214f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.530787 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a7cecfd-1168-4187-a70c-7b2151ff214f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.531402 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.533619 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.533652 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.534027 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.534215 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.535080 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.549566 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65"] Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.606470 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ctr4\" (UniqueName: \"kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.606524 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.606546 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.606574 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.606600 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.707881 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ctr4\" (UniqueName: \"kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.708169 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.708270 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.708379 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.708488 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.713037 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: 
\"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.714273 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.718613 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.718853 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.725110 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ctr4\" (UniqueName: \"kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.847572 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:01 crc kubenswrapper[4740]: I0216 13:26:01.396461 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65"] Feb 16 13:26:01 crc kubenswrapper[4740]: W0216 13:26:01.400714 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ab3e576_ab98_496c_a189_2e79796f9e98.slice/crio-6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7 WatchSource:0}: Error finding container 6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7: Status 404 returned error can't find the container with id 6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7 Feb 16 13:26:01 crc kubenswrapper[4740]: I0216 13:26:01.438353 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" event={"ID":"2ab3e576-ab98-496c-a189-2e79796f9e98","Type":"ContainerStarted","Data":"6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7"} Feb 16 13:26:02 crc kubenswrapper[4740]: I0216 13:26:02.454785 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" event={"ID":"2ab3e576-ab98-496c-a189-2e79796f9e98","Type":"ContainerStarted","Data":"00d940ff891b6d31b8981e01a86592ecfae7fb23e2e69d93d48fd3e2223b31d7"} Feb 16 13:26:02 crc kubenswrapper[4740]: I0216 13:26:02.480625 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" podStartSLOduration=2.058793017 podStartE2EDuration="2.480591322s" podCreationTimestamp="2026-02-16 13:26:00 +0000 UTC" firstStartedPulling="2026-02-16 13:26:01.405687549 +0000 UTC m=+1988.782036270" lastFinishedPulling="2026-02-16 13:26:01.827485854 +0000 UTC m=+1989.203834575" 
observedRunningTime="2026-02-16 13:26:02.472962213 +0000 UTC m=+1989.849310934" watchObservedRunningTime="2026-02-16 13:26:02.480591322 +0000 UTC m=+1989.856940033" Feb 16 13:26:15 crc kubenswrapper[4740]: I0216 13:26:15.574746 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:26:15 crc kubenswrapper[4740]: I0216 13:26:15.575220 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.575437 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.576534 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.576892 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.578177 4740 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.578275 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239" gracePeriod=600 Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.898082 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239" exitCode=0 Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.898156 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239"} Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.898504 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:26:46 crc kubenswrapper[4740]: I0216 13:26:46.909581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea"} Feb 16 13:28:45 crc kubenswrapper[4740]: I0216 13:28:45.575544 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:28:45 crc kubenswrapper[4740]: I0216 13:28:45.576157 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:29:15 crc kubenswrapper[4740]: I0216 13:29:15.575431 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:29:15 crc kubenswrapper[4740]: I0216 13:29:15.575996 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:29:45 crc kubenswrapper[4740]: I0216 13:29:45.574971 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:29:45 crc kubenswrapper[4740]: I0216 13:29:45.575760 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:29:45 crc kubenswrapper[4740]: I0216 13:29:45.575865 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:29:45 crc kubenswrapper[4740]: I0216 13:29:45.577141 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:29:45 crc kubenswrapper[4740]: I0216 13:29:45.577267 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" gracePeriod=600 Feb 16 13:29:45 crc kubenswrapper[4740]: E0216 13:29:45.699866 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:29:46 crc kubenswrapper[4740]: I0216 13:29:46.632395 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" exitCode=0 Feb 16 13:29:46 crc kubenswrapper[4740]: I0216 13:29:46.632728 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea"} Feb 16 13:29:46 crc kubenswrapper[4740]: I0216 13:29:46.632761 4740 scope.go:117] "RemoveContainer" containerID="6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239" Feb 16 13:29:46 crc kubenswrapper[4740]: I0216 13:29:46.633446 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:29:46 crc kubenswrapper[4740]: E0216 13:29:46.633678 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.373015 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.376927 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.390655 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.467425 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.467783 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rp9z\" (UniqueName: \"kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.468448 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.570178 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.570257 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.570309 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rp9z\" (UniqueName: \"kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.570996 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.571061 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.593774 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rp9z\" (UniqueName: \"kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.725651 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:51 crc kubenswrapper[4740]: I0216 13:29:51.238448 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:29:51 crc kubenswrapper[4740]: I0216 13:29:51.683724 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerID="ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4" exitCode=0 Feb 16 13:29:51 crc kubenswrapper[4740]: I0216 13:29:51.683842 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerDied","Data":"ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4"} Feb 16 13:29:51 crc kubenswrapper[4740]: I0216 13:29:51.685104 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerStarted","Data":"3bb3773605d31d7edb6a9738bbc759b67a01af96bc53a8aeae49b88045e7b813"} Feb 16 13:29:53 crc kubenswrapper[4740]: I0216 13:29:53.707468 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerID="f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82" exitCode=0 Feb 16 13:29:53 crc kubenswrapper[4740]: I0216 13:29:53.707567 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerDied","Data":"f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82"} Feb 16 13:29:53 crc kubenswrapper[4740]: I0216 13:29:53.712580 4740 generic.go:334] "Generic (PLEG): container finished" podID="2ab3e576-ab98-496c-a189-2e79796f9e98" containerID="00d940ff891b6d31b8981e01a86592ecfae7fb23e2e69d93d48fd3e2223b31d7" exitCode=0 Feb 16 
13:29:53 crc kubenswrapper[4740]: I0216 13:29:53.712642 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" event={"ID":"2ab3e576-ab98-496c-a189-2e79796f9e98","Type":"ContainerDied","Data":"00d940ff891b6d31b8981e01a86592ecfae7fb23e2e69d93d48fd3e2223b31d7"} Feb 16 13:29:54 crc kubenswrapper[4740]: I0216 13:29:54.722017 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerStarted","Data":"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c"} Feb 16 13:29:54 crc kubenswrapper[4740]: I0216 13:29:54.747055 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-brjxc" podStartSLOduration=2.103185228 podStartE2EDuration="4.747034231s" podCreationTimestamp="2026-02-16 13:29:50 +0000 UTC" firstStartedPulling="2026-02-16 13:29:51.685584578 +0000 UTC m=+2219.061933309" lastFinishedPulling="2026-02-16 13:29:54.329433591 +0000 UTC m=+2221.705782312" observedRunningTime="2026-02-16 13:29:54.742349184 +0000 UTC m=+2222.118697905" watchObservedRunningTime="2026-02-16 13:29:54.747034231 +0000 UTC m=+2222.123382952" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.159428 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.264360 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory\") pod \"2ab3e576-ab98-496c-a189-2e79796f9e98\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.264591 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ctr4\" (UniqueName: \"kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4\") pod \"2ab3e576-ab98-496c-a189-2e79796f9e98\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.264683 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam\") pod \"2ab3e576-ab98-496c-a189-2e79796f9e98\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.264848 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle\") pod \"2ab3e576-ab98-496c-a189-2e79796f9e98\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.264904 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0\") pod \"2ab3e576-ab98-496c-a189-2e79796f9e98\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.271651 4740 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2ab3e576-ab98-496c-a189-2e79796f9e98" (UID: "2ab3e576-ab98-496c-a189-2e79796f9e98"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.271740 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4" (OuterVolumeSpecName: "kube-api-access-8ctr4") pod "2ab3e576-ab98-496c-a189-2e79796f9e98" (UID: "2ab3e576-ab98-496c-a189-2e79796f9e98"). InnerVolumeSpecName "kube-api-access-8ctr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.295128 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "2ab3e576-ab98-496c-a189-2e79796f9e98" (UID: "2ab3e576-ab98-496c-a189-2e79796f9e98"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.302901 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2ab3e576-ab98-496c-a189-2e79796f9e98" (UID: "2ab3e576-ab98-496c-a189-2e79796f9e98"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.305137 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory" (OuterVolumeSpecName: "inventory") pod "2ab3e576-ab98-496c-a189-2e79796f9e98" (UID: "2ab3e576-ab98-496c-a189-2e79796f9e98"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.369893 4740 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.369936 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.369957 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ctr4\" (UniqueName: \"kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4\") on node \"crc\" DevicePath \"\"" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.369978 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.370013 4740 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.732846 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.732841 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" event={"ID":"2ab3e576-ab98-496c-a189-2e79796f9e98","Type":"ContainerDied","Data":"6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7"} Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.732977 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.854408 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj"] Feb 16 13:29:55 crc kubenswrapper[4740]: E0216 13:29:55.855193 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab3e576-ab98-496c-a189-2e79796f9e98" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.855223 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab3e576-ab98-496c-a189-2e79796f9e98" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.855504 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab3e576-ab98-496c-a189-2e79796f9e98" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.856382 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.858642 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.858668 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.858795 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.858643 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.859987 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.860235 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.869565 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj"] Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.891381 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.986641 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: 
I0216 13:29:55.986847 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.986947 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987023 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987114 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvrcf\" (UniqueName: \"kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987181 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" 
(UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987251 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987316 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987347 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987418 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: 
\"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987550 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.088805 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.088886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.088933 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.088976 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089009 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089049 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvrcf\" (UniqueName: \"kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089067 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089090 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089116 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089135 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089154 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.090146 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.094788 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.095513 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.095546 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.096053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.096535 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.105708 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.106009 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.106175 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.106573 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.109055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvrcf\" (UniqueName: 
\"kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.180551 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.741711 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj"] Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.744575 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" event={"ID":"58706e85-268c-4ce0-b1e4-82dd86872568","Type":"ContainerStarted","Data":"a66651bb06a9c8ba05576de47c0da196fe76991428b790e9e660d8ab3c28e74d"} Feb 16 13:29:57 crc kubenswrapper[4740]: I0216 13:29:57.755017 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" event={"ID":"58706e85-268c-4ce0-b1e4-82dd86872568","Type":"ContainerStarted","Data":"f8dfccb248f53dd9fdb3059db9ec3a73dd1f026ee947f237e40616dbda324d1b"} Feb 16 13:29:57 crc kubenswrapper[4740]: I0216 13:29:57.783545 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" podStartSLOduration=2.1966779770000002 podStartE2EDuration="2.783481094s" podCreationTimestamp="2026-02-16 13:29:55 +0000 UTC" firstStartedPulling="2026-02-16 13:29:56.725880311 +0000 UTC m=+2224.102229072" lastFinishedPulling="2026-02-16 13:29:57.312683448 +0000 UTC m=+2224.689032189" observedRunningTime="2026-02-16 13:29:57.782311097 +0000 UTC m=+2225.158659858" watchObservedRunningTime="2026-02-16 13:29:57.783481094 +0000 UTC m=+2225.159829855" Feb 16 13:29:58 crc kubenswrapper[4740]: 
I0216 13:29:58.281474 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:29:58 crc kubenswrapper[4740]: E0216 13:29:58.281723 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.140167 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr"] Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.142046 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.145774 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.154424 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr"] Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.154719 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.286653 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.286768 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49pq9\" (UniqueName: \"kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.287090 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.389197 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.390037 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49pq9\" (UniqueName: \"kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.390338 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.391039 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.404525 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.409786 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49pq9\" (UniqueName: \"kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.470474 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.726361 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.726694 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.776937 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.827592 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:00 crc kubenswrapper[4740]: W0216 13:30:00.925953 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode01f374e_7e34_4175_b300_1d1a5f95c85e.slice/crio-332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd WatchSource:0}: Error finding container 332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd: Status 404 returned error can't find the container with id 332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.936042 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr"] Feb 16 13:30:01 crc kubenswrapper[4740]: I0216 13:30:01.016553 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:30:01 crc kubenswrapper[4740]: I0216 13:30:01.793979 4740 generic.go:334] "Generic (PLEG): container finished" podID="e01f374e-7e34-4175-b300-1d1a5f95c85e" 
containerID="b4252b006f9315f40457f664ad289bfcd573b8f4acc4b78060ada515a3efb79d" exitCode=0 Feb 16 13:30:01 crc kubenswrapper[4740]: I0216 13:30:01.794104 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" event={"ID":"e01f374e-7e34-4175-b300-1d1a5f95c85e","Type":"ContainerDied","Data":"b4252b006f9315f40457f664ad289bfcd573b8f4acc4b78060ada515a3efb79d"} Feb 16 13:30:01 crc kubenswrapper[4740]: I0216 13:30:01.794445 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" event={"ID":"e01f374e-7e34-4175-b300-1d1a5f95c85e","Type":"ContainerStarted","Data":"332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd"} Feb 16 13:30:02 crc kubenswrapper[4740]: I0216 13:30:02.805394 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-brjxc" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="registry-server" containerID="cri-o://6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c" gracePeriod=2 Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.178388 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.276377 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.352202 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume\") pod \"e01f374e-7e34-4175-b300-1d1a5f95c85e\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.352246 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49pq9\" (UniqueName: \"kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9\") pod \"e01f374e-7e34-4175-b300-1d1a5f95c85e\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.352329 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume\") pod \"e01f374e-7e34-4175-b300-1d1a5f95c85e\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.353135 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume" (OuterVolumeSpecName: "config-volume") pod "e01f374e-7e34-4175-b300-1d1a5f95c85e" (UID: "e01f374e-7e34-4175-b300-1d1a5f95c85e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.357577 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e01f374e-7e34-4175-b300-1d1a5f95c85e" (UID: "e01f374e-7e34-4175-b300-1d1a5f95c85e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.358863 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9" (OuterVolumeSpecName: "kube-api-access-49pq9") pod "e01f374e-7e34-4175-b300-1d1a5f95c85e" (UID: "e01f374e-7e34-4175-b300-1d1a5f95c85e"). InnerVolumeSpecName "kube-api-access-49pq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.454235 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content\") pod \"fbcc12dc-03f3-4820-865b-e43d66da1be5\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.454458 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities\") pod \"fbcc12dc-03f3-4820-865b-e43d66da1be5\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.454534 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rp9z\" (UniqueName: \"kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z\") pod \"fbcc12dc-03f3-4820-865b-e43d66da1be5\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.455009 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.455034 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49pq9\" 
(UniqueName: \"kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.455044 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.455990 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities" (OuterVolumeSpecName: "utilities") pod "fbcc12dc-03f3-4820-865b-e43d66da1be5" (UID: "fbcc12dc-03f3-4820-865b-e43d66da1be5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.460121 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z" (OuterVolumeSpecName: "kube-api-access-2rp9z") pod "fbcc12dc-03f3-4820-865b-e43d66da1be5" (UID: "fbcc12dc-03f3-4820-865b-e43d66da1be5"). InnerVolumeSpecName "kube-api-access-2rp9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.512373 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbcc12dc-03f3-4820-865b-e43d66da1be5" (UID: "fbcc12dc-03f3-4820-865b-e43d66da1be5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.556785 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.556833 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.556844 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rp9z\" (UniqueName: \"kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.817267 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.817911 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" event={"ID":"e01f374e-7e34-4175-b300-1d1a5f95c85e","Type":"ContainerDied","Data":"332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd"} Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.817977 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.821188 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerID="6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c" exitCode=0 Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.821255 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerDied","Data":"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c"} Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.821292 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerDied","Data":"3bb3773605d31d7edb6a9738bbc759b67a01af96bc53a8aeae49b88045e7b813"} Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.821295 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.821312 4740 scope.go:117] "RemoveContainer" containerID="6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.868857 4740 scope.go:117] "RemoveContainer" containerID="f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.874639 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.882769 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.904066 4740 scope.go:117] "RemoveContainer" containerID="ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.920074 4740 scope.go:117] "RemoveContainer" containerID="6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c" Feb 16 13:30:03 crc kubenswrapper[4740]: E0216 13:30:03.920425 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c\": container with ID starting with 6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c not found: ID does not exist" containerID="6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.920461 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c"} err="failed to get container status \"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c\": rpc error: code = NotFound desc = could not find container \"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c\": container with ID starting with 6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c not found: ID does not exist" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.920483 4740 scope.go:117] "RemoveContainer" containerID="f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82" Feb 16 13:30:03 crc kubenswrapper[4740]: E0216 13:30:03.920697 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82\": container with ID starting with f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82 not found: ID does not exist" containerID="f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.920717 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82"} err="failed to get container status \"f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82\": rpc error: code = NotFound desc = could not find container \"f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82\": container with ID 
starting with f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82 not found: ID does not exist" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.920730 4740 scope.go:117] "RemoveContainer" containerID="ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4" Feb 16 13:30:03 crc kubenswrapper[4740]: E0216 13:30:03.920985 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4\": container with ID starting with ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4 not found: ID does not exist" containerID="ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.921006 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4"} err="failed to get container status \"ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4\": rpc error: code = NotFound desc = could not find container \"ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4\": container with ID starting with ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4 not found: ID does not exist" Feb 16 13:30:04 crc kubenswrapper[4740]: I0216 13:30:04.258657 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"] Feb 16 13:30:04 crc kubenswrapper[4740]: I0216 13:30:04.269269 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"] Feb 16 13:30:05 crc kubenswrapper[4740]: I0216 13:30:05.294498 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9062ffdd-baa5-4ebc-8f40-353fac0e821e" path="/var/lib/kubelet/pods/9062ffdd-baa5-4ebc-8f40-353fac0e821e/volumes" Feb 16 13:30:05 
crc kubenswrapper[4740]: I0216 13:30:05.296055 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" path="/var/lib/kubelet/pods/fbcc12dc-03f3-4820-865b-e43d66da1be5/volumes" Feb 16 13:30:05 crc kubenswrapper[4740]: I0216 13:30:05.965449 4740 scope.go:117] "RemoveContainer" containerID="907512b6409722d30c16b894b8c9741e98ad5cd769eb1a4db429190f1ce78cae" Feb 16 13:30:11 crc kubenswrapper[4740]: I0216 13:30:11.281475 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:30:11 crc kubenswrapper[4740]: E0216 13:30:11.282292 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:30:23 crc kubenswrapper[4740]: I0216 13:30:23.287079 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:30:23 crc kubenswrapper[4740]: E0216 13:30:23.287801 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:30:36 crc kubenswrapper[4740]: I0216 13:30:36.282267 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:30:36 crc kubenswrapper[4740]: E0216 13:30:36.283278 
4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:30:50 crc kubenswrapper[4740]: I0216 13:30:50.287836 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:30:50 crc kubenswrapper[4740]: E0216 13:30:50.289141 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:31:05 crc kubenswrapper[4740]: I0216 13:31:05.281875 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:31:05 crc kubenswrapper[4740]: E0216 13:31:05.282853 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:31:17 crc kubenswrapper[4740]: I0216 13:31:17.281697 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:31:17 crc kubenswrapper[4740]: E0216 
13:31:17.282547 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:31:31 crc kubenswrapper[4740]: I0216 13:31:31.281472 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:31:31 crc kubenswrapper[4740]: E0216 13:31:31.282565 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.183637 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:35 crc kubenswrapper[4740]: E0216 13:31:35.184434 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="extract-utilities" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184446 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="extract-utilities" Feb 16 13:31:35 crc kubenswrapper[4740]: E0216 13:31:35.184466 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="registry-server" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184472 4740 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="registry-server" Feb 16 13:31:35 crc kubenswrapper[4740]: E0216 13:31:35.184515 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="extract-content" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184521 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="extract-content" Feb 16 13:31:35 crc kubenswrapper[4740]: E0216 13:31:35.184533 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01f374e-7e34-4175-b300-1d1a5f95c85e" containerName="collect-profiles" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184539 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01f374e-7e34-4175-b300-1d1a5f95c85e" containerName="collect-profiles" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184701 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01f374e-7e34-4175-b300-1d1a5f95c85e" containerName="collect-profiles" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184718 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="registry-server" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.209388 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.217197 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.382144 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chm2k\" (UniqueName: \"kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.382435 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.382640 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.485563 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chm2k\" (UniqueName: \"kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.485657 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.485725 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.486723 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.487283 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.505709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chm2k\" (UniqueName: \"kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.538745 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:36 crc kubenswrapper[4740]: I0216 13:31:36.005552 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:36 crc kubenswrapper[4740]: I0216 13:31:36.710149 4740 generic.go:334] "Generic (PLEG): container finished" podID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerID="3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835" exitCode=0 Feb 16 13:31:36 crc kubenswrapper[4740]: I0216 13:31:36.710200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerDied","Data":"3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835"} Feb 16 13:31:36 crc kubenswrapper[4740]: I0216 13:31:36.710525 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerStarted","Data":"2dcbb0582701463a2dee9946fec569414499594c16cd1e7030fbd328e5d7fb94"} Feb 16 13:31:36 crc kubenswrapper[4740]: I0216 13:31:36.712908 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:31:37 crc kubenswrapper[4740]: I0216 13:31:37.724975 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerStarted","Data":"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a"} Feb 16 13:31:38 crc kubenswrapper[4740]: I0216 13:31:38.741121 4740 generic.go:334] "Generic (PLEG): container finished" podID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerID="f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a" exitCode=0 Feb 16 13:31:38 crc kubenswrapper[4740]: I0216 13:31:38.741219 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerDied","Data":"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a"} Feb 16 13:31:39 crc kubenswrapper[4740]: I0216 13:31:39.752990 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerStarted","Data":"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa"} Feb 16 13:31:39 crc kubenswrapper[4740]: I0216 13:31:39.784604 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9gmt7" podStartSLOduration=2.253947753 podStartE2EDuration="4.784579631s" podCreationTimestamp="2026-02-16 13:31:35 +0000 UTC" firstStartedPulling="2026-02-16 13:31:36.712398306 +0000 UTC m=+2324.088747047" lastFinishedPulling="2026-02-16 13:31:39.243030204 +0000 UTC m=+2326.619378925" observedRunningTime="2026-02-16 13:31:39.775069346 +0000 UTC m=+2327.151418077" watchObservedRunningTime="2026-02-16 13:31:39.784579631 +0000 UTC m=+2327.160928352" Feb 16 13:31:42 crc kubenswrapper[4740]: I0216 13:31:42.281546 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:31:42 crc kubenswrapper[4740]: E0216 13:31:42.282328 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:31:45 crc kubenswrapper[4740]: I0216 13:31:45.539912 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:45 crc kubenswrapper[4740]: I0216 13:31:45.540273 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:45 crc kubenswrapper[4740]: I0216 13:31:45.586468 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:45 crc kubenswrapper[4740]: I0216 13:31:45.853917 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:45 crc kubenswrapper[4740]: I0216 13:31:45.901127 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:47 crc kubenswrapper[4740]: I0216 13:31:47.843180 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9gmt7" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="registry-server" containerID="cri-o://2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa" gracePeriod=2 Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.328442 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.447364 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chm2k\" (UniqueName: \"kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k\") pod \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.447446 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities\") pod \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.447559 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content\") pod \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.448998 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities" (OuterVolumeSpecName: "utilities") pod "40f84c73-01b8-48f4-8bd7-30a4be00f6c5" (UID: "40f84c73-01b8-48f4-8bd7-30a4be00f6c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.454251 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k" (OuterVolumeSpecName: "kube-api-access-chm2k") pod "40f84c73-01b8-48f4-8bd7-30a4be00f6c5" (UID: "40f84c73-01b8-48f4-8bd7-30a4be00f6c5"). InnerVolumeSpecName "kube-api-access-chm2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.470417 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40f84c73-01b8-48f4-8bd7-30a4be00f6c5" (UID: "40f84c73-01b8-48f4-8bd7-30a4be00f6c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.550108 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.550135 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chm2k\" (UniqueName: \"kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k\") on node \"crc\" DevicePath \"\"" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.550147 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.855879 4740 generic.go:334] "Generic (PLEG): container finished" podID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerID="2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa" exitCode=0 Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.855953 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerDied","Data":"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa"} Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.855990 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerDied","Data":"2dcbb0582701463a2dee9946fec569414499594c16cd1e7030fbd328e5d7fb94"} Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.856011 4740 scope.go:117] "RemoveContainer" containerID="2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.856200 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.885244 4740 scope.go:117] "RemoveContainer" containerID="f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.905927 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.915481 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.927982 4740 scope.go:117] "RemoveContainer" containerID="3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.968644 4740 scope.go:117] "RemoveContainer" containerID="2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa" Feb 16 13:31:48 crc kubenswrapper[4740]: E0216 13:31:48.969549 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa\": container with ID starting with 2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa not found: ID does not exist" containerID="2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.969602 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa"} err="failed to get container status \"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa\": rpc error: code = NotFound desc = could not find container \"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa\": container with ID starting with 2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa not found: ID does not exist" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.969635 4740 scope.go:117] "RemoveContainer" containerID="f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a" Feb 16 13:31:48 crc kubenswrapper[4740]: E0216 13:31:48.970134 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a\": container with ID starting with f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a not found: ID does not exist" containerID="f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.970157 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a"} err="failed to get container status \"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a\": rpc error: code = NotFound desc = could not find container \"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a\": container with ID starting with f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a not found: ID does not exist" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.970171 4740 scope.go:117] "RemoveContainer" containerID="3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835" Feb 16 13:31:48 crc kubenswrapper[4740]: E0216 
13:31:48.970425 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835\": container with ID starting with 3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835 not found: ID does not exist" containerID="3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.970452 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835"} err="failed to get container status \"3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835\": rpc error: code = NotFound desc = could not find container \"3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835\": container with ID starting with 3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835 not found: ID does not exist" Feb 16 13:31:49 crc kubenswrapper[4740]: I0216 13:31:49.300994 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" path="/var/lib/kubelet/pods/40f84c73-01b8-48f4-8bd7-30a4be00f6c5/volumes" Feb 16 13:31:54 crc kubenswrapper[4740]: I0216 13:31:54.281219 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:31:54 crc kubenswrapper[4740]: E0216 13:31:54.281927 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:32:07 crc kubenswrapper[4740]: I0216 13:32:07.281567 
4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:32:07 crc kubenswrapper[4740]: E0216 13:32:07.283015 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:32:15 crc kubenswrapper[4740]: I0216 13:32:15.371238 4740 generic.go:334] "Generic (PLEG): container finished" podID="58706e85-268c-4ce0-b1e4-82dd86872568" containerID="f8dfccb248f53dd9fdb3059db9ec3a73dd1f026ee947f237e40616dbda324d1b" exitCode=0 Feb 16 13:32:15 crc kubenswrapper[4740]: I0216 13:32:15.372138 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" event={"ID":"58706e85-268c-4ce0-b1e4-82dd86872568","Type":"ContainerDied","Data":"f8dfccb248f53dd9fdb3059db9ec3a73dd1f026ee947f237e40616dbda324d1b"} Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.807740 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931584 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvrcf\" (UniqueName: \"kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931673 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931734 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931776 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931885 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931917 
4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.932533 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.932568 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.932607 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.932630 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.932659 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.938069 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.945597 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf" (OuterVolumeSpecName: "kube-api-access-vvrcf") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "kube-api-access-vvrcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.962086 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.962923 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory" (OuterVolumeSpecName: "inventory") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.966868 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.969258 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.972516 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.980081 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.980439 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.984104 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.987127 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.036964 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvrcf\" (UniqueName: \"kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037581 4740 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037661 4740 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037705 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037718 4740 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037730 4740 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037741 4740 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037754 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037764 4740 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037777 4740 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037787 4740 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.392237 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" event={"ID":"58706e85-268c-4ce0-b1e4-82dd86872568","Type":"ContainerDied","Data":"a66651bb06a9c8ba05576de47c0da196fe76991428b790e9e660d8ab3c28e74d"} Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.392296 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a66651bb06a9c8ba05576de47c0da196fe76991428b790e9e660d8ab3c28e74d" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.392350 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.513250 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn"] Feb 16 13:32:17 crc kubenswrapper[4740]: E0216 13:32:17.513715 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="extract-content" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.513735 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="extract-content" Feb 16 13:32:17 crc kubenswrapper[4740]: E0216 13:32:17.513770 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="registry-server" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.513779 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="registry-server" Feb 16 13:32:17 crc kubenswrapper[4740]: E0216 13:32:17.513793 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58706e85-268c-4ce0-b1e4-82dd86872568" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.513802 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="58706e85-268c-4ce0-b1e4-82dd86872568" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 13:32:17 crc kubenswrapper[4740]: E0216 13:32:17.513862 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="extract-utilities" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.513871 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="extract-utilities" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.514127 4740 
memory_manager.go:354] "RemoveStaleState removing state" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="registry-server" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.514168 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="58706e85-268c-4ce0-b1e4-82dd86872568" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.515215 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.517419 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.517420 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.519207 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.519241 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.519623 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.521615 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn"] Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651374 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651583 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnlxs\" (UniqueName: \"kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651684 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651748 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: 
\"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651853 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651874 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.753954 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.754072 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc 
kubenswrapper[4740]: I0216 13:32:17.754144 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnlxs\" (UniqueName: \"kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.754182 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.754218 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.754258 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.754279 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.761533 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.761719 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.761547 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.761714 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.761533 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.762207 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.774211 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnlxs\" (UniqueName: \"kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.832642 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:18 crc kubenswrapper[4740]: I0216 13:32:18.375065 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn"] Feb 16 13:32:18 crc kubenswrapper[4740]: I0216 13:32:18.403431 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" event={"ID":"590a1858-7b00-48c8-a2b4-dae7b652ed89","Type":"ContainerStarted","Data":"d4b2351e8245ae52c1c31435c7db70bced645a480174d49bcafb0bdc583bf46e"} Feb 16 13:32:19 crc kubenswrapper[4740]: I0216 13:32:19.433000 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" event={"ID":"590a1858-7b00-48c8-a2b4-dae7b652ed89","Type":"ContainerStarted","Data":"ce0ef89403c661369cedba6260da0206089a73ac4d228d99a9048a25b07de957"} Feb 16 13:32:19 crc kubenswrapper[4740]: I0216 13:32:19.477939 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" podStartSLOduration=2.000476662 podStartE2EDuration="2.477917634s" podCreationTimestamp="2026-02-16 13:32:17 +0000 UTC" firstStartedPulling="2026-02-16 13:32:18.383651804 +0000 UTC m=+2365.760000525" lastFinishedPulling="2026-02-16 13:32:18.861092776 +0000 UTC m=+2366.237441497" observedRunningTime="2026-02-16 13:32:19.47522393 +0000 UTC m=+2366.851572661" watchObservedRunningTime="2026-02-16 13:32:19.477917634 +0000 UTC m=+2366.854266355" Feb 16 13:32:21 crc kubenswrapper[4740]: I0216 13:32:21.281650 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:32:21 crc kubenswrapper[4740]: E0216 13:32:21.282235 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:32:36 crc kubenswrapper[4740]: I0216 13:32:36.281683 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:32:36 crc kubenswrapper[4740]: E0216 13:32:36.282666 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:32:48 crc kubenswrapper[4740]: I0216 13:32:48.281754 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:32:48 crc kubenswrapper[4740]: E0216 13:32:48.282735 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:32:59 crc kubenswrapper[4740]: I0216 13:32:59.281625 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:32:59 crc kubenswrapper[4740]: E0216 13:32:59.282712 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:33:12 crc kubenswrapper[4740]: I0216 13:33:12.281208 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:33:12 crc kubenswrapper[4740]: E0216 13:33:12.282063 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:33:27 crc kubenswrapper[4740]: I0216 13:33:27.281575 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:33:27 crc kubenswrapper[4740]: E0216 13:33:27.283454 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:33:38 crc kubenswrapper[4740]: I0216 13:33:38.282263 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:33:38 crc kubenswrapper[4740]: E0216 13:33:38.283479 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:33:51 crc kubenswrapper[4740]: I0216 13:33:51.280907 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:33:51 crc kubenswrapper[4740]: E0216 13:33:51.281683 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:34:04 crc kubenswrapper[4740]: I0216 13:34:04.281990 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:34:04 crc kubenswrapper[4740]: E0216 13:34:04.282820 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.210965 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.213525 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.233967 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.376376 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh694\" (UniqueName: \"kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.376624 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.376733 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.478094 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh694\" (UniqueName: \"kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.478391 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.478494 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.479208 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.479521 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.498496 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh694\" (UniqueName: \"kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.537556 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.799877 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:08 crc kubenswrapper[4740]: W0216 13:34:08.806557 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a5b86e1_c137_4d7d_a184_e6f1ac9fa48b.slice/crio-1a07c034600911d3211ce87e185ccc39f7b0a670f89a12468918fa704fcd8c85 WatchSource:0}: Error finding container 1a07c034600911d3211ce87e185ccc39f7b0a670f89a12468918fa704fcd8c85: Status 404 returned error can't find the container with id 1a07c034600911d3211ce87e185ccc39f7b0a670f89a12468918fa704fcd8c85 Feb 16 13:34:09 crc kubenswrapper[4740]: I0216 13:34:09.411839 4740 generic.go:334] "Generic (PLEG): container finished" podID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerID="91124add7238daaae875a4759d18573d98695401e71f59b7beb81d9365e7cdc4" exitCode=0 Feb 16 13:34:09 crc kubenswrapper[4740]: I0216 13:34:09.412032 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerDied","Data":"91124add7238daaae875a4759d18573d98695401e71f59b7beb81d9365e7cdc4"} Feb 16 13:34:09 crc kubenswrapper[4740]: I0216 13:34:09.412252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerStarted","Data":"1a07c034600911d3211ce87e185ccc39f7b0a670f89a12468918fa704fcd8c85"} Feb 16 13:34:10 crc kubenswrapper[4740]: I0216 13:34:10.422719 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" 
event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerStarted","Data":"f5608e0423773aaf7e6dc91127106c2e30973759f4efb057227e9d642e7971a1"} Feb 16 13:34:11 crc kubenswrapper[4740]: I0216 13:34:11.437343 4740 generic.go:334] "Generic (PLEG): container finished" podID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerID="f5608e0423773aaf7e6dc91127106c2e30973759f4efb057227e9d642e7971a1" exitCode=0 Feb 16 13:34:11 crc kubenswrapper[4740]: I0216 13:34:11.437418 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerDied","Data":"f5608e0423773aaf7e6dc91127106c2e30973759f4efb057227e9d642e7971a1"} Feb 16 13:34:12 crc kubenswrapper[4740]: I0216 13:34:12.447057 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerStarted","Data":"bda8b7c153354bbcad8f553408d5b661a0c739d96dc6afb34098c67b4b4bfc09"} Feb 16 13:34:12 crc kubenswrapper[4740]: I0216 13:34:12.474265 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-46rsw" podStartSLOduration=1.758728372 podStartE2EDuration="4.474244708s" podCreationTimestamp="2026-02-16 13:34:08 +0000 UTC" firstStartedPulling="2026-02-16 13:34:09.413878782 +0000 UTC m=+2476.790227503" lastFinishedPulling="2026-02-16 13:34:12.129395108 +0000 UTC m=+2479.505743839" observedRunningTime="2026-02-16 13:34:12.465489905 +0000 UTC m=+2479.841838626" watchObservedRunningTime="2026-02-16 13:34:12.474244708 +0000 UTC m=+2479.850593429" Feb 16 13:34:15 crc kubenswrapper[4740]: I0216 13:34:15.282839 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:34:15 crc kubenswrapper[4740]: E0216 13:34:15.283656 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:34:18 crc kubenswrapper[4740]: I0216 13:34:18.537920 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:18 crc kubenswrapper[4740]: I0216 13:34:18.538572 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:18 crc kubenswrapper[4740]: I0216 13:34:18.608341 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:19 crc kubenswrapper[4740]: I0216 13:34:19.552673 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:19 crc kubenswrapper[4740]: I0216 13:34:19.619155 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:21 crc kubenswrapper[4740]: I0216 13:34:21.529159 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-46rsw" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="registry-server" containerID="cri-o://bda8b7c153354bbcad8f553408d5b661a0c739d96dc6afb34098c67b4b4bfc09" gracePeriod=2 Feb 16 13:34:22 crc kubenswrapper[4740]: I0216 13:34:22.541093 4740 generic.go:334] "Generic (PLEG): container finished" podID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerID="bda8b7c153354bbcad8f553408d5b661a0c739d96dc6afb34098c67b4b4bfc09" exitCode=0 Feb 16 13:34:22 crc kubenswrapper[4740]: I0216 13:34:22.541152 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerDied","Data":"bda8b7c153354bbcad8f553408d5b661a0c739d96dc6afb34098c67b4b4bfc09"} Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.080154 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.092134 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh694\" (UniqueName: \"kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694\") pod \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.092349 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content\") pod \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.092438 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities\") pod \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.093760 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities" (OuterVolumeSpecName: "utilities") pod "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" (UID: "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.104729 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694" (OuterVolumeSpecName: "kube-api-access-mh694") pod "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" (UID: "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b"). InnerVolumeSpecName "kube-api-access-mh694". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.195403 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.195445 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh694\" (UniqueName: \"kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.212353 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" (UID: "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.296662 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.554870 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerDied","Data":"1a07c034600911d3211ce87e185ccc39f7b0a670f89a12468918fa704fcd8c85"} Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.554935 4740 scope.go:117] "RemoveContainer" containerID="bda8b7c153354bbcad8f553408d5b661a0c739d96dc6afb34098c67b4b4bfc09" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.554960 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.585030 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.590896 4740 scope.go:117] "RemoveContainer" containerID="f5608e0423773aaf7e6dc91127106c2e30973759f4efb057227e9d642e7971a1" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.598379 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.628168 4740 scope.go:117] "RemoveContainer" containerID="91124add7238daaae875a4759d18573d98695401e71f59b7beb81d9365e7cdc4" Feb 16 13:34:25 crc kubenswrapper[4740]: I0216 13:34:25.293656 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" path="/var/lib/kubelet/pods/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b/volumes" Feb 16 13:34:27 crc 
kubenswrapper[4740]: I0216 13:34:27.281056 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:34:27 crc kubenswrapper[4740]: E0216 13:34:27.281886 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:34:38 crc kubenswrapper[4740]: I0216 13:34:38.692283 4740 generic.go:334] "Generic (PLEG): container finished" podID="590a1858-7b00-48c8-a2b4-dae7b652ed89" containerID="ce0ef89403c661369cedba6260da0206089a73ac4d228d99a9048a25b07de957" exitCode=0 Feb 16 13:34:38 crc kubenswrapper[4740]: I0216 13:34:38.692375 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" event={"ID":"590a1858-7b00-48c8-a2b4-dae7b652ed89","Type":"ContainerDied","Data":"ce0ef89403c661369cedba6260da0206089a73ac4d228d99a9048a25b07de957"} Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.131173 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.249005 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.249113 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.249778 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.249854 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.249936 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnlxs\" (UniqueName: \"kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 
13:34:40.249975 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.250040 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.254796 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs" (OuterVolumeSpecName: "kube-api-access-mnlxs") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "kube-api-access-mnlxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.255409 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.280322 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory" (OuterVolumeSpecName: "inventory") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.282802 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.283249 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.285050 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.291422 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352271 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352306 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352319 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352331 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352348 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc 
kubenswrapper[4740]: I0216 13:34:40.352362 4740 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352374 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnlxs\" (UniqueName: \"kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.710673 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" event={"ID":"590a1858-7b00-48c8-a2b4-dae7b652ed89","Type":"ContainerDied","Data":"d4b2351e8245ae52c1c31435c7db70bced645a480174d49bcafb0bdc583bf46e"} Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.710985 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b2351e8245ae52c1c31435c7db70bced645a480174d49bcafb0bdc583bf46e" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.710871 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:34:42 crc kubenswrapper[4740]: I0216 13:34:42.282064 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:34:42 crc kubenswrapper[4740]: E0216 13:34:42.283125 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:34:55 crc kubenswrapper[4740]: I0216 13:34:55.281551 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:34:55 crc kubenswrapper[4740]: I0216 13:34:55.907391 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410"} Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.683866 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 13:35:38 crc kubenswrapper[4740]: E0216 13:35:38.684969 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="extract-utilities" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.684992 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="extract-utilities" Feb 16 13:35:38 crc kubenswrapper[4740]: E0216 13:35:38.685025 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="extract-content" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.685035 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="extract-content" Feb 16 13:35:38 crc kubenswrapper[4740]: E0216 13:35:38.685057 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="registry-server" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.685067 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="registry-server" Feb 16 13:35:38 crc kubenswrapper[4740]: E0216 13:35:38.685092 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590a1858-7b00-48c8-a2b4-dae7b652ed89" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.685102 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="590a1858-7b00-48c8-a2b4-dae7b652ed89" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.685377 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="registry-server" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.685413 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="590a1858-7b00-48c8-a2b4-dae7b652ed89" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.688139 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.691289 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.691651 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.691947 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.692267 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mh4bs" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.695105 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.834876 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835356 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835387 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: 
\"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835418 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzmdt\" (UniqueName: \"kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835455 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835734 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835793 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.836114 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.937735 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.937797 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.937849 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.937883 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzmdt\" (UniqueName: \"kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt\") pod \"tempest-tests-tempest\" (UID: 
\"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.937912 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938395 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938463 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938503 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938041 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938571 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938632 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938859 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.939666 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.939705 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc 
kubenswrapper[4740]: I0216 13:35:38.945957 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.946188 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.953958 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.956014 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzmdt\" (UniqueName: \"kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.967004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:39 crc kubenswrapper[4740]: I0216 13:35:39.023435 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:35:39 crc kubenswrapper[4740]: I0216 13:35:39.463143 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 13:35:40 crc kubenswrapper[4740]: I0216 13:35:40.317402 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"90aac50c-27a6-4ebd-b207-d3bc439dc1fe","Type":"ContainerStarted","Data":"985883f65375a0c0cb1b9ea3b01ad81f1f9f0b11972d1aff7e3292172bada842"} Feb 16 13:36:08 crc kubenswrapper[4740]: E0216 13:36:08.208221 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 16 13:36:08 crc kubenswrapper[4740]: E0216 13:36:08.208893 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xzmdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(90aac50c-27a6-4ebd-b207-d3bc439dc1fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:36:08 crc kubenswrapper[4740]: E0216 13:36:08.210994 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" Feb 16 13:36:08 crc kubenswrapper[4740]: E0216 13:36:08.603769 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.291500 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.299319 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.315848 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.390074 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj49w\" (UniqueName: \"kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.391309 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.391603 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.493856 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.493919 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.494013 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj49w\" (UniqueName: \"kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.494390 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.494520 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.513673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj49w\" (UniqueName: \"kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.642284 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:11 crc kubenswrapper[4740]: I0216 13:36:11.508900 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:11 crc kubenswrapper[4740]: W0216 13:36:11.511308 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08bd798f_5a43_4738_9a77_e66a59468ba6.slice/crio-a47449fe856852adc33c5d9024ab7aa9edb59bd881015e3a4518a4ec3b420ba5 WatchSource:0}: Error finding container a47449fe856852adc33c5d9024ab7aa9edb59bd881015e3a4518a4ec3b420ba5: Status 404 returned error can't find the container with id a47449fe856852adc33c5d9024ab7aa9edb59bd881015e3a4518a4ec3b420ba5 Feb 16 13:36:11 crc kubenswrapper[4740]: I0216 13:36:11.624196 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerStarted","Data":"a47449fe856852adc33c5d9024ab7aa9edb59bd881015e3a4518a4ec3b420ba5"} Feb 16 13:36:12 crc kubenswrapper[4740]: I0216 13:36:12.634638 4740 generic.go:334] "Generic (PLEG): container finished" podID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerID="af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35" exitCode=0 Feb 16 13:36:12 crc kubenswrapper[4740]: I0216 13:36:12.634711 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerDied","Data":"af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35"} Feb 16 13:36:13 crc kubenswrapper[4740]: I0216 13:36:13.647472 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" 
event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerStarted","Data":"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28"} Feb 16 13:36:14 crc kubenswrapper[4740]: I0216 13:36:14.663233 4740 generic.go:334] "Generic (PLEG): container finished" podID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerID="b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28" exitCode=0 Feb 16 13:36:14 crc kubenswrapper[4740]: I0216 13:36:14.663326 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerDied","Data":"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28"} Feb 16 13:36:15 crc kubenswrapper[4740]: I0216 13:36:15.672282 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerStarted","Data":"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a"} Feb 16 13:36:15 crc kubenswrapper[4740]: I0216 13:36:15.692670 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hwtx2" podStartSLOduration=3.275788767 podStartE2EDuration="5.692639143s" podCreationTimestamp="2026-02-16 13:36:10 +0000 UTC" firstStartedPulling="2026-02-16 13:36:12.637935587 +0000 UTC m=+2600.014284308" lastFinishedPulling="2026-02-16 13:36:15.054785963 +0000 UTC m=+2602.431134684" observedRunningTime="2026-02-16 13:36:15.687349689 +0000 UTC m=+2603.063698460" watchObservedRunningTime="2026-02-16 13:36:15.692639143 +0000 UTC m=+2603.068987904" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.643678 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.644319 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.685026 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.714558 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.780915 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.928228 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:21 crc kubenswrapper[4740]: I0216 13:36:21.753084 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"90aac50c-27a6-4ebd-b207-d3bc439dc1fe","Type":"ContainerStarted","Data":"026326a984d26a260e5f7dd20f7d5284ba1cee86ee7c080001b48ba2acec81a1"} Feb 16 13:36:21 crc kubenswrapper[4740]: I0216 13:36:21.780101 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.540448053 podStartE2EDuration="44.780084181s" podCreationTimestamp="2026-02-16 13:35:37 +0000 UTC" firstStartedPulling="2026-02-16 13:35:39.471713712 +0000 UTC m=+2566.848062433" lastFinishedPulling="2026-02-16 13:36:20.71134984 +0000 UTC m=+2608.087698561" observedRunningTime="2026-02-16 13:36:21.775240141 +0000 UTC m=+2609.151588872" watchObservedRunningTime="2026-02-16 13:36:21.780084181 +0000 UTC m=+2609.156432902" Feb 16 13:36:22 crc kubenswrapper[4740]: I0216 13:36:22.760941 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hwtx2" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" 
containerName="registry-server" containerID="cri-o://d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a" gracePeriod=2 Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.290448 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.353403 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities\") pod \"08bd798f-5a43-4738-9a77-e66a59468ba6\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.353541 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj49w\" (UniqueName: \"kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w\") pod \"08bd798f-5a43-4738-9a77-e66a59468ba6\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.353595 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content\") pod \"08bd798f-5a43-4738-9a77-e66a59468ba6\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.354503 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities" (OuterVolumeSpecName: "utilities") pod "08bd798f-5a43-4738-9a77-e66a59468ba6" (UID: "08bd798f-5a43-4738-9a77-e66a59468ba6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.359719 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w" (OuterVolumeSpecName: "kube-api-access-nj49w") pod "08bd798f-5a43-4738-9a77-e66a59468ba6" (UID: "08bd798f-5a43-4738-9a77-e66a59468ba6"). InnerVolumeSpecName "kube-api-access-nj49w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.422634 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08bd798f-5a43-4738-9a77-e66a59468ba6" (UID: "08bd798f-5a43-4738-9a77-e66a59468ba6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.456268 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.456303 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj49w\" (UniqueName: \"kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w\") on node \"crc\" DevicePath \"\"" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.456314 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.774108 4740 generic.go:334] "Generic (PLEG): container finished" podID="08bd798f-5a43-4738-9a77-e66a59468ba6" 
containerID="d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a" exitCode=0 Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.774153 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerDied","Data":"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a"} Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.774182 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.774216 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerDied","Data":"a47449fe856852adc33c5d9024ab7aa9edb59bd881015e3a4518a4ec3b420ba5"} Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.774242 4740 scope.go:117] "RemoveContainer" containerID="d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.814690 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.821536 4740 scope.go:117] "RemoveContainer" containerID="b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.822805 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.854212 4740 scope.go:117] "RemoveContainer" containerID="af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.905168 4740 scope.go:117] "RemoveContainer" containerID="d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a" Feb 16 
13:36:23 crc kubenswrapper[4740]: E0216 13:36:23.905632 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a\": container with ID starting with d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a not found: ID does not exist" containerID="d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.905694 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a"} err="failed to get container status \"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a\": rpc error: code = NotFound desc = could not find container \"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a\": container with ID starting with d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a not found: ID does not exist" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.905731 4740 scope.go:117] "RemoveContainer" containerID="b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28" Feb 16 13:36:23 crc kubenswrapper[4740]: E0216 13:36:23.906122 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28\": container with ID starting with b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28 not found: ID does not exist" containerID="b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.906152 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28"} err="failed to get container status 
\"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28\": rpc error: code = NotFound desc = could not find container \"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28\": container with ID starting with b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28 not found: ID does not exist" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.906176 4740 scope.go:117] "RemoveContainer" containerID="af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35" Feb 16 13:36:23 crc kubenswrapper[4740]: E0216 13:36:23.906405 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35\": container with ID starting with af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35 not found: ID does not exist" containerID="af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.906427 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35"} err="failed to get container status \"af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35\": rpc error: code = NotFound desc = could not find container \"af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35\": container with ID starting with af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35 not found: ID does not exist" Feb 16 13:36:25 crc kubenswrapper[4740]: I0216 13:36:25.290593 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" path="/var/lib/kubelet/pods/08bd798f-5a43-4738-9a77-e66a59468ba6/volumes" Feb 16 13:37:15 crc kubenswrapper[4740]: I0216 13:37:15.575257 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:37:15 crc kubenswrapper[4740]: I0216 13:37:15.575867 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:37:45 crc kubenswrapper[4740]: I0216 13:37:45.575153 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:37:45 crc kubenswrapper[4740]: I0216 13:37:45.576047 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.574758 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.575368 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.575413 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.576062 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.576119 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410" gracePeriod=600 Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.858659 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410" exitCode=0 Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.858706 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410"} Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.858748 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:38:16 crc kubenswrapper[4740]: I0216 13:38:16.869064 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"} Feb 16 13:40:15 crc kubenswrapper[4740]: I0216 13:40:15.574939 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:40:15 crc kubenswrapper[4740]: I0216 13:40:15.575553 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.415691 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q592g"] Feb 16 13:40:30 crc kubenswrapper[4740]: E0216 13:40:30.416922 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="extract-utilities" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.416943 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="extract-utilities" Feb 16 13:40:30 crc kubenswrapper[4740]: E0216 13:40:30.416975 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="registry-server" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.416984 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="registry-server" Feb 16 13:40:30 crc kubenswrapper[4740]: E0216 
13:40:30.416997 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="extract-content" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.417005 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="extract-content" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.417229 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="registry-server" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.427271 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.435954 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q592g"] Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.601662 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.602020 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.602070 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmc6s\" (UniqueName: 
\"kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.703326 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.703412 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmc6s\" (UniqueName: \"kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.703548 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.704129 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.704137 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.726856 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmc6s\" (UniqueName: \"kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.760744 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:31 crc kubenswrapper[4740]: I0216 13:40:31.242864 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q592g"] Feb 16 13:40:32 crc kubenswrapper[4740]: I0216 13:40:32.114509 4740 generic.go:334] "Generic (PLEG): container finished" podID="d68afe4b-5647-465b-b601-f16548640dcd" containerID="d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63" exitCode=0 Feb 16 13:40:32 crc kubenswrapper[4740]: I0216 13:40:32.114843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerDied","Data":"d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63"} Feb 16 13:40:32 crc kubenswrapper[4740]: I0216 13:40:32.114880 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerStarted","Data":"fe3dc1554969873d17f2cb07996884b9f394eeb1f06f36890aaf0b0e3ffbc114"} Feb 16 13:40:32 crc kubenswrapper[4740]: I0216 13:40:32.117325 4740 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 16 13:40:34 crc kubenswrapper[4740]: I0216 13:40:34.132572 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerStarted","Data":"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f"} Feb 16 13:40:37 crc kubenswrapper[4740]: I0216 13:40:37.161490 4740 generic.go:334] "Generic (PLEG): container finished" podID="d68afe4b-5647-465b-b601-f16548640dcd" containerID="b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f" exitCode=0 Feb 16 13:40:37 crc kubenswrapper[4740]: I0216 13:40:37.161593 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerDied","Data":"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f"} Feb 16 13:40:38 crc kubenswrapper[4740]: I0216 13:40:38.176173 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerStarted","Data":"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1"} Feb 16 13:40:38 crc kubenswrapper[4740]: I0216 13:40:38.205323 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q592g" podStartSLOduration=2.742681792 podStartE2EDuration="8.205301444s" podCreationTimestamp="2026-02-16 13:40:30 +0000 UTC" firstStartedPulling="2026-02-16 13:40:32.117034999 +0000 UTC m=+2859.493383720" lastFinishedPulling="2026-02-16 13:40:37.579654651 +0000 UTC m=+2864.956003372" observedRunningTime="2026-02-16 13:40:38.198595555 +0000 UTC m=+2865.574944306" watchObservedRunningTime="2026-02-16 13:40:38.205301444 +0000 UTC m=+2865.581650165" Feb 16 13:40:40 crc kubenswrapper[4740]: I0216 13:40:40.761826 4740 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:40 crc kubenswrapper[4740]: I0216 13:40:40.762198 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:41 crc kubenswrapper[4740]: I0216 13:40:41.805080 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-q592g" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="registry-server" probeResult="failure" output=< Feb 16 13:40:41 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 13:40:41 crc kubenswrapper[4740]: > Feb 16 13:40:45 crc kubenswrapper[4740]: I0216 13:40:45.575126 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:40:45 crc kubenswrapper[4740]: I0216 13:40:45.576139 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:40:50 crc kubenswrapper[4740]: I0216 13:40:50.803605 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:50 crc kubenswrapper[4740]: I0216 13:40:50.850174 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:51 crc kubenswrapper[4740]: I0216 13:40:51.041543 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-q592g"]
Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.315964 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q592g" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="registry-server" containerID="cri-o://11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1" gracePeriod=2
Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.800015 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q592g"
Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.875486 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content\") pod \"d68afe4b-5647-465b-b601-f16548640dcd\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") "
Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.883190 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities\") pod \"d68afe4b-5647-465b-b601-f16548640dcd\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") "
Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.883247 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmc6s\" (UniqueName: \"kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s\") pod \"d68afe4b-5647-465b-b601-f16548640dcd\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") "
Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.885422 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities" (OuterVolumeSpecName: "utilities") pod "d68afe4b-5647-465b-b601-f16548640dcd" (UID: "d68afe4b-5647-465b-b601-f16548640dcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.896692 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s" (OuterVolumeSpecName: "kube-api-access-tmc6s") pod "d68afe4b-5647-465b-b601-f16548640dcd" (UID: "d68afe4b-5647-465b-b601-f16548640dcd"). InnerVolumeSpecName "kube-api-access-tmc6s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.926577 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d68afe4b-5647-465b-b601-f16548640dcd" (UID: "d68afe4b-5647-465b-b601-f16548640dcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.985896 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.985931 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.985943 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmc6s\" (UniqueName: \"kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s\") on node \"crc\" DevicePath \"\""
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.329424 4740 generic.go:334] "Generic (PLEG): container finished" podID="d68afe4b-5647-465b-b601-f16548640dcd" containerID="11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1" exitCode=0
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.329467 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerDied","Data":"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1"}
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.329492 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerDied","Data":"fe3dc1554969873d17f2cb07996884b9f394eeb1f06f36890aaf0b0e3ffbc114"}
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.329508 4740 scope.go:117] "RemoveContainer" containerID="11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1"
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.329634 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q592g"
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.361651 4740 scope.go:117] "RemoveContainer" containerID="b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f"
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.372005 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q592g"]
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.382962 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q592g"]
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.388243 4740 scope.go:117] "RemoveContainer" containerID="d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63"
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.431918 4740 scope.go:117] "RemoveContainer" containerID="11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1"
Feb 16 13:40:53 crc kubenswrapper[4740]: E0216 13:40:53.432365 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1\": container with ID starting with 11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1 not found: ID does not exist" containerID="11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1"
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.432415 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1"} err="failed to get container status \"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1\": rpc error: code = NotFound desc = could not find container \"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1\": container with ID starting with 11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1 not found: ID does not exist"
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.432452 4740 scope.go:117] "RemoveContainer" containerID="b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f"
Feb 16 13:40:53 crc kubenswrapper[4740]: E0216 13:40:53.432964 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f\": container with ID starting with b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f not found: ID does not exist" containerID="b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f"
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.433013 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f"} err="failed to get container status \"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f\": rpc error: code = NotFound desc = could not find container \"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f\": container with ID starting with b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f not found: ID does not exist"
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.433041 4740 scope.go:117] "RemoveContainer" containerID="d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63"
Feb 16 13:40:53 crc kubenswrapper[4740]: E0216 13:40:53.433399 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63\": container with ID starting with d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63 not found: ID does not exist" containerID="d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63"
Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.433435 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63"} err="failed to get container status \"d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63\": rpc error: code = NotFound desc = could not find container \"d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63\": container with ID starting with d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63 not found: ID does not exist"
Feb 16 13:40:55 crc kubenswrapper[4740]: I0216 13:40:55.292166 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d68afe4b-5647-465b-b601-f16548640dcd" path="/var/lib/kubelet/pods/d68afe4b-5647-465b-b601-f16548640dcd/volumes"
Feb 16 13:41:15 crc kubenswrapper[4740]: I0216 13:41:15.574679 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 13:41:15 crc kubenswrapper[4740]: I0216 13:41:15.575352 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 13:41:15 crc kubenswrapper[4740]: I0216 13:41:15.575429 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj"
Feb 16 13:41:15 crc kubenswrapper[4740]: I0216 13:41:15.578598 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 13:41:15 crc kubenswrapper[4740]: I0216 13:41:15.578735 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" gracePeriod=600
Feb 16 13:41:15 crc kubenswrapper[4740]: E0216 13:41:15.705790 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:41:16 crc kubenswrapper[4740]: I0216 13:41:16.548135 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" exitCode=0
Feb 16 13:41:16 crc kubenswrapper[4740]: I0216 13:41:16.548219 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"}
Feb 16 13:41:16 crc kubenswrapper[4740]: I0216 13:41:16.548288 4740 scope.go:117] "RemoveContainer" containerID="6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410"
Feb 16 13:41:16 crc kubenswrapper[4740]: I0216 13:41:16.549109 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:41:16 crc kubenswrapper[4740]: E0216 13:41:16.549571 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:41:29 crc kubenswrapper[4740]: I0216 13:41:29.282177 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:41:29 crc kubenswrapper[4740]: E0216 13:41:29.285167 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:41:43 crc kubenswrapper[4740]: I0216 13:41:43.292124 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:41:43 crc kubenswrapper[4740]: E0216 13:41:43.292876 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:41:54 crc kubenswrapper[4740]: I0216 13:41:54.281139 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:41:54 crc kubenswrapper[4740]: E0216 13:41:54.281924 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:42:08 crc kubenswrapper[4740]: I0216 13:42:08.281182 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:42:08 crc kubenswrapper[4740]: E0216 13:42:08.282100 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.905779 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:10 crc kubenswrapper[4740]: E0216 13:42:10.906563 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="registry-server"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.906579 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="registry-server"
Feb 16 13:42:10 crc kubenswrapper[4740]: E0216 13:42:10.906610 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="extract-utilities"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.906619 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="extract-utilities"
Feb 16 13:42:10 crc kubenswrapper[4740]: E0216 13:42:10.906627 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="extract-content"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.906636 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="extract-content"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.906894 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="registry-server"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.909002 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.926689 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.972160 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.972535 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49z5n\" (UniqueName: \"kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.972826 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.078216 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.078367 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.078432 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49z5n\" (UniqueName: \"kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.079464 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.079641 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.109423 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49z5n\" (UniqueName: \"kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.239001 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.685855 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:12 crc kubenswrapper[4740]: I0216 13:42:12.109423 4740 generic.go:334] "Generic (PLEG): container finished" podID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerID="da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599" exitCode=0
Feb 16 13:42:12 crc kubenswrapper[4740]: I0216 13:42:12.109487 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerDied","Data":"da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599"}
Feb 16 13:42:12 crc kubenswrapper[4740]: I0216 13:42:12.109709 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerStarted","Data":"e23c166928f79bac061c06865364209cfe408791af5880f73140fc249f485603"}
Feb 16 13:42:13 crc kubenswrapper[4740]: I0216 13:42:13.119098 4740 generic.go:334] "Generic (PLEG): container finished" podID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerID="583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3" exitCode=0
Feb 16 13:42:13 crc kubenswrapper[4740]: I0216 13:42:13.119168 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerDied","Data":"583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3"}
Feb 16 13:42:14 crc kubenswrapper[4740]: I0216 13:42:14.131170 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerStarted","Data":"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"}
Feb 16 13:42:14 crc kubenswrapper[4740]: I0216 13:42:14.153959 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zgzdd" podStartSLOduration=2.728885505 podStartE2EDuration="4.153936399s" podCreationTimestamp="2026-02-16 13:42:10 +0000 UTC" firstStartedPulling="2026-02-16 13:42:12.111067356 +0000 UTC m=+2959.487416077" lastFinishedPulling="2026-02-16 13:42:13.53611824 +0000 UTC m=+2960.912466971" observedRunningTime="2026-02-16 13:42:14.148988235 +0000 UTC m=+2961.525336996" watchObservedRunningTime="2026-02-16 13:42:14.153936399 +0000 UTC m=+2961.530285130"
Feb 16 13:42:21 crc kubenswrapper[4740]: I0216 13:42:21.239446 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:21 crc kubenswrapper[4740]: I0216 13:42:21.240979 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:21 crc kubenswrapper[4740]: I0216 13:42:21.291735 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:22 crc kubenswrapper[4740]: I0216 13:42:22.247451 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:22 crc kubenswrapper[4740]: I0216 13:42:22.282089 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:42:22 crc kubenswrapper[4740]: E0216 13:42:22.282515 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:42:22 crc kubenswrapper[4740]: I0216 13:42:22.299536 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.229078 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zgzdd" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="registry-server" containerID="cri-o://c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c" gracePeriod=2
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.707120 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.870786 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content\") pod \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") "
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.870917 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities\") pod \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") "
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.871072 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49z5n\" (UniqueName: \"kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n\") pod \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") "
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.871862 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities" (OuterVolumeSpecName: "utilities") pod "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" (UID: "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.876035 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n" (OuterVolumeSpecName: "kube-api-access-49z5n") pod "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" (UID: "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab"). InnerVolumeSpecName "kube-api-access-49z5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.893767 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" (UID: "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.973801 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.973956 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49z5n\" (UniqueName: \"kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n\") on node \"crc\" DevicePath \"\""
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.973973 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.240791 4740 generic.go:334] "Generic (PLEG): container finished" podID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerID="c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c" exitCode=0
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.240864 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerDied","Data":"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"}
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.240894 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerDied","Data":"e23c166928f79bac061c06865364209cfe408791af5880f73140fc249f485603"}
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.240916 4740 scope.go:117] "RemoveContainer" containerID="c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.240959 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.270825 4740 scope.go:117] "RemoveContainer" containerID="583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.300383 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.300960 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.309369 4740 scope.go:117] "RemoveContainer" containerID="da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.356218 4740 scope.go:117] "RemoveContainer" containerID="c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"
Feb 16 13:42:25 crc kubenswrapper[4740]: E0216 13:42:25.357043 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c\": container with ID starting with c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c not found: ID does not exist" containerID="c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.357123 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"} err="failed to get container status \"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c\": rpc error: code = NotFound desc = could not find container \"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c\": container with ID starting with c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c not found: ID does not exist"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.357158 4740 scope.go:117] "RemoveContainer" containerID="583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3"
Feb 16 13:42:25 crc kubenswrapper[4740]: E0216 13:42:25.357676 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3\": container with ID starting with 583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3 not found: ID does not exist" containerID="583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.357709 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3"} err="failed to get container status \"583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3\": rpc error: code = NotFound desc = could not find container \"583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3\": container with ID starting with 583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3 not found: ID does not exist"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.357731 4740 scope.go:117] "RemoveContainer" containerID="da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599"
Feb 16 13:42:25 crc kubenswrapper[4740]: E0216 13:42:25.357972 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599\": container with ID starting with da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599 not found: ID does not exist" containerID="da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.358006 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599"} err="failed to get container status \"da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599\": rpc error: code = NotFound desc = could not find container \"da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599\": container with ID starting with da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599 not found: ID does not exist"
Feb 16 13:42:27 crc kubenswrapper[4740]: I0216 13:42:27.302104 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" path="/var/lib/kubelet/pods/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab/volumes"
Feb 16 13:42:36 crc kubenswrapper[4740]: I0216 13:42:36.281772 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:42:36 crc kubenswrapper[4740]: E0216 13:42:36.282772 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:42:49 crc kubenswrapper[4740]: I0216 13:42:49.281881 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:42:49 crc kubenswrapper[4740]: E0216 13:42:49.282729 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:43:02 crc kubenswrapper[4740]: I0216 13:43:02.282583 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:43:02 crc kubenswrapper[4740]: E0216 13:43:02.283540 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:43:14 crc kubenswrapper[4740]: I0216 13:43:14.281374 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:43:14 crc kubenswrapper[4740]: E0216 13:43:14.282179 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:43:26 crc kubenswrapper[4740]: I0216
13:43:26.281837 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:43:26 crc kubenswrapper[4740]: E0216 13:43:26.282926 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:43:38 crc kubenswrapper[4740]: I0216 13:43:38.281198 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:43:38 crc kubenswrapper[4740]: E0216 13:43:38.282078 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:43:49 crc kubenswrapper[4740]: I0216 13:43:49.281894 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:43:49 crc kubenswrapper[4740]: E0216 13:43:49.282701 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:44:03 crc 
kubenswrapper[4740]: I0216 13:44:03.289770 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:44:03 crc kubenswrapper[4740]: E0216 13:44:03.290621 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:44:14 crc kubenswrapper[4740]: I0216 13:44:14.281521 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:44:14 crc kubenswrapper[4740]: E0216 13:44:14.282395 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:44:29 crc kubenswrapper[4740]: I0216 13:44:29.281344 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:44:29 crc kubenswrapper[4740]: E0216 13:44:29.282431 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 
16 13:44:43 crc kubenswrapper[4740]: I0216 13:44:43.289390 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:44:43 crc kubenswrapper[4740]: E0216 13:44:43.290177 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:44:58 crc kubenswrapper[4740]: I0216 13:44:58.282481 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:44:58 crc kubenswrapper[4740]: E0216 13:44:58.283747 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.157516 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"] Feb 16 13:45:00 crc kubenswrapper[4740]: E0216 13:45:00.158322 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="extract-content" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.158337 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="extract-content" Feb 16 13:45:00 crc kubenswrapper[4740]: E0216 13:45:00.158356 4740 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="registry-server" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.158364 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="registry-server" Feb 16 13:45:00 crc kubenswrapper[4740]: E0216 13:45:00.158386 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="extract-utilities" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.158394 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="extract-utilities" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.158631 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="registry-server" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.159401 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.161792 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.162097 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.166224 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"] Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.308429 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.308605 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n79bv\" (UniqueName: \"kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.308663 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.409967 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.410070 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.410178 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n79bv\" (UniqueName: \"kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.412142 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.432349 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.436292 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n79bv\" (UniqueName: \"kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.534226 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.981909 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"] Feb 16 13:45:01 crc kubenswrapper[4740]: I0216 13:45:01.693148 4740 generic.go:334] "Generic (PLEG): container finished" podID="4a238536-d171-4c4b-9520-c2bb6ab8931c" containerID="1c71581268eb82f32a7fdeaef37e4f1f954561bbb4a5fc80b6dcbfa17823b9f2" exitCode=0 Feb 16 13:45:01 crc kubenswrapper[4740]: I0216 13:45:01.693213 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" event={"ID":"4a238536-d171-4c4b-9520-c2bb6ab8931c","Type":"ContainerDied","Data":"1c71581268eb82f32a7fdeaef37e4f1f954561bbb4a5fc80b6dcbfa17823b9f2"} Feb 16 13:45:01 crc kubenswrapper[4740]: I0216 13:45:01.693966 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" 
event={"ID":"4a238536-d171-4c4b-9520-c2bb6ab8931c","Type":"ContainerStarted","Data":"5a0f734a4dd3a4289578516d4c5c36a09affd96cb09f31afc6c43c08bfa85fbe"} Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.072185 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.170939 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume\") pod \"4a238536-d171-4c4b-9520-c2bb6ab8931c\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.171026 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n79bv\" (UniqueName: \"kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv\") pod \"4a238536-d171-4c4b-9520-c2bb6ab8931c\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.171234 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume\") pod \"4a238536-d171-4c4b-9520-c2bb6ab8931c\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.172100 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a238536-d171-4c4b-9520-c2bb6ab8931c" (UID: "4a238536-d171-4c4b-9520-c2bb6ab8931c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.176306 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a238536-d171-4c4b-9520-c2bb6ab8931c" (UID: "4a238536-d171-4c4b-9520-c2bb6ab8931c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.176916 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv" (OuterVolumeSpecName: "kube-api-access-n79bv") pod "4a238536-d171-4c4b-9520-c2bb6ab8931c" (UID: "4a238536-d171-4c4b-9520-c2bb6ab8931c"). InnerVolumeSpecName "kube-api-access-n79bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.273791 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.273857 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.273872 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n79bv\" (UniqueName: \"kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.709171 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" 
event={"ID":"4a238536-d171-4c4b-9520-c2bb6ab8931c","Type":"ContainerDied","Data":"5a0f734a4dd3a4289578516d4c5c36a09affd96cb09f31afc6c43c08bfa85fbe"} Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.709630 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a0f734a4dd3a4289578516d4c5c36a09affd96cb09f31afc6c43c08bfa85fbe" Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.709270 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:04 crc kubenswrapper[4740]: I0216 13:45:04.157035 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r"] Feb 16 13:45:04 crc kubenswrapper[4740]: I0216 13:45:04.166885 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r"] Feb 16 13:45:05 crc kubenswrapper[4740]: I0216 13:45:05.292093 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" path="/var/lib/kubelet/pods/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb/volumes" Feb 16 13:45:06 crc kubenswrapper[4740]: I0216 13:45:06.383620 4740 scope.go:117] "RemoveContainer" containerID="db29968995b45d1f7cc2cd53a227253b37be7ae972a329e7a6e867128e553405" Feb 16 13:45:10 crc kubenswrapper[4740]: I0216 13:45:10.282036 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:45:10 crc kubenswrapper[4740]: E0216 13:45:10.283254 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:45:21 crc kubenswrapper[4740]: I0216 13:45:21.282056 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:45:21 crc kubenswrapper[4740]: E0216 13:45:21.283032 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:45:36 crc kubenswrapper[4740]: I0216 13:45:36.281151 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:45:36 crc kubenswrapper[4740]: E0216 13:45:36.281877 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:45:48 crc kubenswrapper[4740]: I0216 13:45:48.281562 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:45:48 crc kubenswrapper[4740]: E0216 13:45:48.283595 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:46:00 crc kubenswrapper[4740]: I0216 13:46:00.281947 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:46:00 crc kubenswrapper[4740]: E0216 13:46:00.283409 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:46:12 crc kubenswrapper[4740]: I0216 13:46:12.282230 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:46:12 crc kubenswrapper[4740]: E0216 13:46:12.283471 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:46:19 crc kubenswrapper[4740]: I0216 13:46:19.349582 4740 generic.go:334] "Generic (PLEG): container finished" podID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" containerID="026326a984d26a260e5f7dd20f7d5284ba1cee86ee7c080001b48ba2acec81a1" exitCode=0 Feb 16 13:46:19 crc kubenswrapper[4740]: I0216 13:46:19.351947 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"90aac50c-27a6-4ebd-b207-d3bc439dc1fe","Type":"ContainerDied","Data":"026326a984d26a260e5f7dd20f7d5284ba1cee86ee7c080001b48ba2acec81a1"} Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.773888 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.946686 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947178 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947386 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947577 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947710 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947844 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947991 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.948313 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.948243 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.948403 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data" (OuterVolumeSpecName: "config-data") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.948588 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzmdt\" (UniqueName: \"kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.949113 4740 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.949219 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.952479 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt" (OuterVolumeSpecName: "kube-api-access-xzmdt") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "kube-api-access-xzmdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.955060 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.961434 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.973492 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.982370 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.986496 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.002367 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051193 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzmdt\" (UniqueName: \"kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051230 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051242 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051255 4740 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051268 4740 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051305 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051318 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.079609 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.152869 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.370312 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"90aac50c-27a6-4ebd-b207-d3bc439dc1fe","Type":"ContainerDied","Data":"985883f65375a0c0cb1b9ea3b01ad81f1f9f0b11972d1aff7e3292172bada842"} Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.370663 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="985883f65375a0c0cb1b9ea3b01ad81f1f9f0b11972d1aff7e3292172bada842" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.370409 4740 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.983918 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 16 13:46:24 crc kubenswrapper[4740]: E0216 13:46:24.984865 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" containerName="tempest-tests-tempest-tests-runner" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.984880 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" containerName="tempest-tests-tempest-tests-runner" Feb 16 13:46:24 crc kubenswrapper[4740]: E0216 13:46:24.984911 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a238536-d171-4c4b-9520-c2bb6ab8931c" containerName="collect-profiles" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.984921 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a238536-d171-4c4b-9520-c2bb6ab8931c" containerName="collect-profiles" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.985097 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a238536-d171-4c4b-9520-c2bb6ab8931c" containerName="collect-profiles" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.985112 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" containerName="tempest-tests-tempest-tests-runner" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.985744 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.987346 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mh4bs" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.001478 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.150917 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.151032 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-589gr\" (UniqueName: \"kubernetes.io/projected/4a270185-f419-49b5-aa81-b6d254269d2d-kube-api-access-589gr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.253261 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.253341 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-589gr\" (UniqueName: 
\"kubernetes.io/projected/4a270185-f419-49b5-aa81-b6d254269d2d-kube-api-access-589gr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.253860 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.277865 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-589gr\" (UniqueName: \"kubernetes.io/projected/4a270185-f419-49b5-aa81-b6d254269d2d-kube-api-access-589gr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.286211 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.287957 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.306309 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.808589 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.810871 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 16 13:46:26 crc kubenswrapper[4740]: I0216 13:46:26.415019 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c"} Feb 16 13:46:26 crc kubenswrapper[4740]: I0216 13:46:26.418256 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"4a270185-f419-49b5-aa81-b6d254269d2d","Type":"ContainerStarted","Data":"3d5443898f4defc2be87624d016591d90cdbec539e1a8a90123b543107b5b099"} Feb 16 13:46:27 crc kubenswrapper[4740]: I0216 13:46:27.428252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"4a270185-f419-49b5-aa81-b6d254269d2d","Type":"ContainerStarted","Data":"7f0b8d0e04744f9d567adf866192349134d3a900c10908a00883a27662a0346a"} Feb 16 13:46:27 crc kubenswrapper[4740]: I0216 13:46:27.449573 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.328801988 podStartE2EDuration="3.449553022s" podCreationTimestamp="2026-02-16 13:46:24 +0000 UTC" firstStartedPulling="2026-02-16 13:46:25.808359739 +0000 UTC m=+3213.184708460" lastFinishedPulling="2026-02-16 13:46:26.929110773 +0000 UTC m=+3214.305459494" observedRunningTime="2026-02-16 13:46:27.442266364 
+0000 UTC m=+3214.818615095" watchObservedRunningTime="2026-02-16 13:46:27.449553022 +0000 UTC m=+3214.825901743" Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.865237 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8952r/must-gather-5m4h9"] Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.880959 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8952r/must-gather-5m4h9"] Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.881663 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.885222 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8952r"/"openshift-service-ca.crt" Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.885375 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8952r"/"default-dockercfg-qxw8g" Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.885492 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8952r"/"kube-root-ca.crt" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.039661 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5w97\" (UniqueName: \"kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.040453 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " 
pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.142670 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.142849 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5w97\" (UniqueName: \"kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.143261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.171235 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5w97\" (UniqueName: \"kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.208870 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.818753 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8952r/must-gather-5m4h9"] Feb 16 13:46:50 crc kubenswrapper[4740]: I0216 13:46:50.637797 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/must-gather-5m4h9" event={"ID":"f7facfd3-bee7-437b-9628-e135acc0d16a","Type":"ContainerStarted","Data":"42871b5417d626a3836de593705b9a20d4dadb26b92dee49e33620324c162cd3"} Feb 16 13:46:56 crc kubenswrapper[4740]: I0216 13:46:56.690102 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/must-gather-5m4h9" event={"ID":"f7facfd3-bee7-437b-9628-e135acc0d16a","Type":"ContainerStarted","Data":"ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f"} Feb 16 13:46:56 crc kubenswrapper[4740]: I0216 13:46:56.690605 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/must-gather-5m4h9" event={"ID":"f7facfd3-bee7-437b-9628-e135acc0d16a","Type":"ContainerStarted","Data":"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80"} Feb 16 13:46:56 crc kubenswrapper[4740]: I0216 13:46:56.705420 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8952r/must-gather-5m4h9" podStartSLOduration=2.865555875 podStartE2EDuration="8.705399715s" podCreationTimestamp="2026-02-16 13:46:48 +0000 UTC" firstStartedPulling="2026-02-16 13:46:49.827415985 +0000 UTC m=+3237.203764706" lastFinishedPulling="2026-02-16 13:46:55.667259825 +0000 UTC m=+3243.043608546" observedRunningTime="2026-02-16 13:46:56.703707702 +0000 UTC m=+3244.080056423" watchObservedRunningTime="2026-02-16 13:46:56.705399715 +0000 UTC m=+3244.081748436" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.403001 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-8952r/crc-debug-468db"] Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.405114 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.440099 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td7ts\" (UniqueName: \"kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.440276 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.542219 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.542678 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td7ts\" (UniqueName: \"kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.542376 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.564985 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td7ts\" (UniqueName: \"kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.722685 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: W0216 13:46:59.767288 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacde68ab_8b2c_4d56_9743_27d87e4829d5.slice/crio-d5680f00a6c5532a4a106cd99dd82ddb65bf357b566d040f81e165ed3f4e9b7a WatchSource:0}: Error finding container d5680f00a6c5532a4a106cd99dd82ddb65bf357b566d040f81e165ed3f4e9b7a: Status 404 returned error can't find the container with id d5680f00a6c5532a4a106cd99dd82ddb65bf357b566d040f81e165ed3f4e9b7a Feb 16 13:47:00 crc kubenswrapper[4740]: I0216 13:47:00.736534 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-468db" event={"ID":"acde68ab-8b2c-4d56-9743-27d87e4829d5","Type":"ContainerStarted","Data":"d5680f00a6c5532a4a106cd99dd82ddb65bf357b566d040f81e165ed3f4e9b7a"} Feb 16 13:47:11 crc kubenswrapper[4740]: I0216 13:47:11.835491 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-468db" event={"ID":"acde68ab-8b2c-4d56-9743-27d87e4829d5","Type":"ContainerStarted","Data":"63f6e3806b9f59c7b120289d4987091581c3ebc8ebf7e0c0bf27263c9e0eeb9f"} Feb 16 13:47:11 crc kubenswrapper[4740]: I0216 
13:47:11.867108 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8952r/crc-debug-468db" podStartSLOduration=1.213017839 podStartE2EDuration="12.867088302s" podCreationTimestamp="2026-02-16 13:46:59 +0000 UTC" firstStartedPulling="2026-02-16 13:46:59.77471058 +0000 UTC m=+3247.151059311" lastFinishedPulling="2026-02-16 13:47:11.428781053 +0000 UTC m=+3258.805129774" observedRunningTime="2026-02-16 13:47:11.85935286 +0000 UTC m=+3259.235701591" watchObservedRunningTime="2026-02-16 13:47:11.867088302 +0000 UTC m=+3259.243437023" Feb 16 13:47:52 crc kubenswrapper[4740]: I0216 13:47:52.404594 4740 generic.go:334] "Generic (PLEG): container finished" podID="acde68ab-8b2c-4d56-9743-27d87e4829d5" containerID="63f6e3806b9f59c7b120289d4987091581c3ebc8ebf7e0c0bf27263c9e0eeb9f" exitCode=0 Feb 16 13:47:52 crc kubenswrapper[4740]: I0216 13:47:52.404686 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-468db" event={"ID":"acde68ab-8b2c-4d56-9743-27d87e4829d5","Type":"ContainerDied","Data":"63f6e3806b9f59c7b120289d4987091581c3ebc8ebf7e0c0bf27263c9e0eeb9f"} Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.523635 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.551279 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8952r/crc-debug-468db"] Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.560786 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8952r/crc-debug-468db"] Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.668954 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td7ts\" (UniqueName: \"kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts\") pod \"acde68ab-8b2c-4d56-9743-27d87e4829d5\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.669583 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host\") pod \"acde68ab-8b2c-4d56-9743-27d87e4829d5\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.669726 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host" (OuterVolumeSpecName: "host") pod "acde68ab-8b2c-4d56-9743-27d87e4829d5" (UID: "acde68ab-8b2c-4d56-9743-27d87e4829d5"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.670214 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.676743 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts" (OuterVolumeSpecName: "kube-api-access-td7ts") pod "acde68ab-8b2c-4d56-9743-27d87e4829d5" (UID: "acde68ab-8b2c-4d56-9743-27d87e4829d5"). InnerVolumeSpecName "kube-api-access-td7ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.772714 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td7ts\" (UniqueName: \"kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.422592 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5680f00a6c5532a4a106cd99dd82ddb65bf357b566d040f81e165ed3f4e9b7a" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.422644 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.745454 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8952r/crc-debug-chqpb"] Feb 16 13:47:54 crc kubenswrapper[4740]: E0216 13:47:54.745983 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acde68ab-8b2c-4d56-9743-27d87e4829d5" containerName="container-00" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.745999 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="acde68ab-8b2c-4d56-9743-27d87e4829d5" containerName="container-00" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.746207 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="acde68ab-8b2c-4d56-9743-27d87e4829d5" containerName="container-00" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.746982 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:54.892890 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r45bt\" (UniqueName: \"kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:54.893080 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:54.994738 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r45bt\" (UniqueName: 
\"kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:54.994902 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:54.995010 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:55.025498 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r45bt\" (UniqueName: \"kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:55.069415 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: W0216 13:47:55.179524 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f683e3c_2ae1_4a2a_a377_a8e489b9fdd6.slice/crio-e064400f35fc9bc91ee480b7f1778d0054ce2064ad8aa4c127bfbc965dccb461 WatchSource:0}: Error finding container e064400f35fc9bc91ee480b7f1778d0054ce2064ad8aa4c127bfbc965dccb461: Status 404 returned error can't find the container with id e064400f35fc9bc91ee480b7f1778d0054ce2064ad8aa4c127bfbc965dccb461 Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:55.294087 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acde68ab-8b2c-4d56-9743-27d87e4829d5" path="/var/lib/kubelet/pods/acde68ab-8b2c-4d56-9743-27d87e4829d5/volumes" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:55.432684 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-chqpb" event={"ID":"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6","Type":"ContainerStarted","Data":"e064400f35fc9bc91ee480b7f1778d0054ce2064ad8aa4c127bfbc965dccb461"} Feb 16 13:47:56 crc kubenswrapper[4740]: I0216 13:47:56.442470 4740 generic.go:334] "Generic (PLEG): container finished" podID="6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" containerID="62f428fad64651455731ee1510e88dff185287c13ccf29ddb69678db4711ae48" exitCode=0 Feb 16 13:47:56 crc kubenswrapper[4740]: I0216 13:47:56.442569 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-chqpb" event={"ID":"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6","Type":"ContainerDied","Data":"62f428fad64651455731ee1510e88dff185287c13ccf29ddb69678db4711ae48"} Feb 16 13:47:56 crc kubenswrapper[4740]: I0216 13:47:56.914750 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8952r/crc-debug-chqpb"] Feb 16 13:47:56 crc kubenswrapper[4740]: I0216 13:47:56.922464 4740 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8952r/crc-debug-chqpb"] Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.558309 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.610315 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r45bt\" (UniqueName: \"kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt\") pod \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.610508 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host\") pod \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.610575 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host" (OuterVolumeSpecName: "host") pod "6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" (UID: "6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.611322 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.617255 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt" (OuterVolumeSpecName: "kube-api-access-r45bt") pod "6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" (UID: "6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6"). InnerVolumeSpecName "kube-api-access-r45bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.713711 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r45bt\" (UniqueName: \"kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.192936 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8952r/crc-debug-vj8wt"] Feb 16 13:47:58 crc kubenswrapper[4740]: E0216 13:47:58.193382 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" containerName="container-00" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.193402 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" containerName="container-00" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.193637 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" containerName="container-00" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.194284 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.224269 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2cw\" (UniqueName: \"kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.224333 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.325857 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2cw\" (UniqueName: \"kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.325925 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.326156 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc 
kubenswrapper[4740]: I0216 13:47:58.347716 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2cw\" (UniqueName: \"kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.461987 4740 scope.go:117] "RemoveContainer" containerID="62f428fad64651455731ee1510e88dff185287c13ccf29ddb69678db4711ae48" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.462062 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.510724 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.295413 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" path="/var/lib/kubelet/pods/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6/volumes" Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.472709 4740 generic.go:334] "Generic (PLEG): container finished" podID="6030077c-d3f7-4009-8ed6-f05b287984cb" containerID="b10bde59a968175480f62e78e4261a2e5bcc77ceb4a1eafa64ca26fa3980f4e2" exitCode=0 Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.472778 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-vj8wt" event={"ID":"6030077c-d3f7-4009-8ed6-f05b287984cb","Type":"ContainerDied","Data":"b10bde59a968175480f62e78e4261a2e5bcc77ceb4a1eafa64ca26fa3980f4e2"} Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.472805 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-vj8wt" 
event={"ID":"6030077c-d3f7-4009-8ed6-f05b287984cb","Type":"ContainerStarted","Data":"3bd7ad1a62877701995657e797fe0b555f43bf058dc16cc503005fb1810502fa"} Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.506857 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8952r/crc-debug-vj8wt"] Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.516014 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8952r/crc-debug-vj8wt"] Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.583179 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.769552 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host\") pod \"6030077c-d3f7-4009-8ed6-f05b287984cb\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.769693 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m2cw\" (UniqueName: \"kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw\") pod \"6030077c-d3f7-4009-8ed6-f05b287984cb\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.770173 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host" (OuterVolumeSpecName: "host") pod "6030077c-d3f7-4009-8ed6-f05b287984cb" (UID: "6030077c-d3f7-4009-8ed6-f05b287984cb"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.770312 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.784007 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw" (OuterVolumeSpecName: "kube-api-access-5m2cw") pod "6030077c-d3f7-4009-8ed6-f05b287984cb" (UID: "6030077c-d3f7-4009-8ed6-f05b287984cb"). InnerVolumeSpecName "kube-api-access-5m2cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.872764 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m2cw\" (UniqueName: \"kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:01 crc kubenswrapper[4740]: I0216 13:48:01.292131 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6030077c-d3f7-4009-8ed6-f05b287984cb" path="/var/lib/kubelet/pods/6030077c-d3f7-4009-8ed6-f05b287984cb/volumes" Feb 16 13:48:01 crc kubenswrapper[4740]: I0216 13:48:01.491841 4740 scope.go:117] "RemoveContainer" containerID="b10bde59a968175480f62e78e4261a2e5bcc77ceb4a1eafa64ca26fa3980f4e2" Feb 16 13:48:01 crc kubenswrapper[4740]: I0216 13:48:01.491860 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:48:16 crc kubenswrapper[4740]: I0216 13:48:16.829176 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fbb5f795d-phd88_793c4693-2327-492b-9798-18501804cdf3/barbican-api/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.013911 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-758fd9dd8b-46z5m_30b251e5-1979-41ad-ad86-efebb5e6a240/barbican-keystone-listener/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.023584 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fbb5f795d-phd88_793c4693-2327-492b-9798-18501804cdf3/barbican-api-log/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.066081 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-758fd9dd8b-46z5m_30b251e5-1979-41ad-ad86-efebb5e6a240/barbican-keystone-listener-log/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.235587 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f4698b555-qswqc_c3550143-6df6-42d0-b18a-8b6275eac907/barbican-worker/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.243262 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f4698b555-qswqc_c3550143-6df6-42d0-b18a-8b6275eac907/barbican-worker-log/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.474871 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8_8e96214f-a46e-451a-97d9-d448c66826f4/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.540576 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/ceilometer-central-agent/0.log" 
Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.585670 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/ceilometer-notification-agent/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.663921 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/proxy-httpd/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.697967 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/sg-core/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.823503 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcc53865-a327-4f02-a908-f0b97ae1e2c2/cinder-api/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.926328 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcc53865-a327-4f02-a908-f0b97ae1e2c2/cinder-api-log/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.001107 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8cc77810-2df3-4a51-8429-326b706d2388/cinder-scheduler/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.080428 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8cc77810-2df3-4a51-8429-326b706d2388/probe/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.146479 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg_3691fefa-c161-4670-bae7-ddde074e2892/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.303476 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw_928b9f1f-3a42-47e3-b895-756f66452ebf/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.392695 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/init/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.558179 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/init/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.642079 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g_fe15334d-14c1-4670-89fe-3b7d4864b782/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.660637 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/dnsmasq-dns/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.815425 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8535644-0ebc-4cc6-bbc5-a5ef02f30685/glance-httpd/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.870218 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8535644-0ebc-4cc6-bbc5-a5ef02f30685/glance-log/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.022744 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1da7f67c-ce66-4f6b-b760-f2ae017599c0/glance-httpd/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.054777 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_1da7f67c-ce66-4f6b-b760-f2ae017599c0/glance-log/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.247029 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-56b9fd8c4d-crftf_add1eb0e-dbfc-463a-b676-3e2e2b1f478d/horizon/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.375525 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh_3e117ddc-9ff8-414d-859b-0a16b4846029/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.558787 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-56b9fd8c4d-crftf_add1eb0e-dbfc-463a-b676-3e2e2b1f478d/horizon-log/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.653732 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-42525_bf3c8754-68ef-4956-a95b-c6751d81b5bf/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.878099 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_05c7ea6d-5a24-4b21-851c-e7d51fa61a38/kube-state-metrics/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.890344 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5cc7d69b6f-dmv77_e68475b5-404f-48fc-a05a-ea18135e837c/keystone-api/0.log" Feb 16 13:48:20 crc kubenswrapper[4740]: I0216 13:48:20.256726 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fjh65_2ab3e576-ab98-496c-a189-2e79796f9e98/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:20 crc kubenswrapper[4740]: I0216 13:48:20.607391 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-7d8c67b945-9qhdf_2d2e1871-02f7-4ff9-9987-054bf39f4418/neutron-httpd/0.log" Feb 16 13:48:20 crc kubenswrapper[4740]: I0216 13:48:20.657776 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d8c67b945-9qhdf_2d2e1871-02f7-4ff9-9987-054bf39f4418/neutron-api/0.log" Feb 16 13:48:20 crc kubenswrapper[4740]: I0216 13:48:20.837691 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w_3a7cecfd-1168-4187-a70c-7b2151ff214f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:21 crc kubenswrapper[4740]: I0216 13:48:21.401904 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_56ee2c81-2a61-476c-9731-b94363864633/nova-api-log/0.log" Feb 16 13:48:21 crc kubenswrapper[4740]: I0216 13:48:21.511310 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_56ee2c81-2a61-476c-9731-b94363864633/nova-api-api/0.log" Feb 16 13:48:21 crc kubenswrapper[4740]: I0216 13:48:21.598367 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_07256285-a907-4822-80dc-b5f5866d437f/nova-cell0-conductor-conductor/0.log" Feb 16 13:48:21 crc kubenswrapper[4740]: I0216 13:48:21.715442 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4465f42a-9c2a-4aa7-9e45-fa28f78cddd7/nova-cell1-conductor-conductor/0.log" Feb 16 13:48:21 crc kubenswrapper[4740]: I0216 13:48:21.784143 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_94da2ded-002e-4aa6-9828-404bee84c146/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.032855 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-lhwdj_58706e85-268c-4ce0-b1e4-82dd86872568/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.200155 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_722ecd51-0827-457b-8d5c-246a1a57e24a/nova-metadata-log/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.425366 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e3ba9a19-9826-4c43-9907-8cd8f1a4272a/nova-scheduler-scheduler/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.588568 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/mysql-bootstrap/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.775135 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/mysql-bootstrap/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.808782 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/galera/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.000424 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/mysql-bootstrap/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.211413 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_722ecd51-0827-457b-8d5c-246a1a57e24a/nova-metadata-metadata/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.222939 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/mysql-bootstrap/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.253280 4740 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/galera/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.432130 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4f78f448-6577-48d1-b077-01e42c14758c/openstackclient/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.548952 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b4j4m_ad1b2300-a42b-4a99-b186-7661bb410a36/openstack-network-exporter/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.679328 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server-init/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.045944 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.049495 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovs-vswitchd/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.139312 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server-init/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.296502 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qnt79_04335a5d-7cac-4a47-982c-70cae9db69ff/ovn-controller/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.387656 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-zzdbk_d66e0695-3544-4fd0-9d34-42bea96ea9de/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 
13:48:24.528440 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d4f80435-6b1f-45e1-bc0c-ff150bd3b33b/openstack-network-exporter/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.622053 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d4f80435-6b1f-45e1-bc0c-ff150bd3b33b/ovn-northd/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.731744 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0ba53212-5a6f-45cb-9547-cccd4b36aa32/openstack-network-exporter/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.798434 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0ba53212-5a6f-45cb-9547-cccd4b36aa32/ovsdbserver-nb/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.928474 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_daca8d6b-05ed-4888-9833-9076a4256166/openstack-network-exporter/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.995566 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_daca8d6b-05ed-4888-9833-9076a4256166/ovsdbserver-sb/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.228704 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-758758df44-4g6db_983c874c-3b25-49df-82cb-b3dfaf1db7ac/placement-api/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.256803 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-758758df44-4g6db_983c874c-3b25-49df-82cb-b3dfaf1db7ac/placement-log/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.325352 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/setup-container/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.523659 4740 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/rabbitmq/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.548453 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/setup-container/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.616474 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/setup-container/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.767853 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/setup-container/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.795342 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/rabbitmq/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.896709 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g_9fa622a2-4774-4038-b9ec-ec4bc7f57a46/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.068889 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4c988_2abfe09c-2736-49b3-b4e5-fb0e30deb510/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.149090 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m_1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.326698 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-r8mds_981b1e60-57d5-4a6b-8531-3fd31dd46fa5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.429921 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-87s8t_8c5c2438-cfba-41a9-b429-80c9ce563348/ssh-known-hosts-edpm-deployment/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.635906 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d4b8b747f-tcdvw_fae3001c-021f-4f48-860e-0893978fafaa/proxy-server/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.813738 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d4b8b747f-tcdvw_fae3001c-021f-4f48-860e-0893978fafaa/proxy-httpd/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.939840 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-auditor/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.946373 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4rgvg_8a769496-58ca-4540-9dc4-bd8df7e682fc/swift-ring-rebalance/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.004789 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-reaper/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.149020 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-replicator/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.231456 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-server/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 
13:48:27.286333 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-auditor/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.333506 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-replicator/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.347690 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-server/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.485627 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-updater/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.732456 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-auditor/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.775065 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-expirer/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.831730 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-replicator/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.888122 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-server/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.959072 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-updater/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.995026 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/rsync/0.log" Feb 16 13:48:28 crc kubenswrapper[4740]: I0216 13:48:28.106798 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/swift-recon-cron/0.log" Feb 16 13:48:28 crc kubenswrapper[4740]: I0216 13:48:28.286590 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-99lsn_590a1858-7b00-48c8-a2b4-dae7b652ed89/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:28 crc kubenswrapper[4740]: I0216 13:48:28.332544 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_90aac50c-27a6-4ebd-b207-d3bc439dc1fe/tempest-tests-tempest-tests-runner/0.log" Feb 16 13:48:28 crc kubenswrapper[4740]: I0216 13:48:28.470061 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_4a270185-f419-49b5-aa81-b6d254269d2d/test-operator-logs-container/0.log" Feb 16 13:48:28 crc kubenswrapper[4740]: I0216 13:48:28.523109 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-w42sv_5add9653-c644-42d7-bd4d-10ecb8f84a90/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.090356 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:31 crc kubenswrapper[4740]: E0216 13:48:31.090878 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6030077c-d3f7-4009-8ed6-f05b287984cb" containerName="container-00" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.090894 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6030077c-d3f7-4009-8ed6-f05b287984cb" containerName="container-00" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 
13:48:31.091083 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6030077c-d3f7-4009-8ed6-f05b287984cb" containerName="container-00" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.092516 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.100041 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.132664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.132823 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.132951 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wr67\" (UniqueName: \"kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.234119 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities\") pod 
\"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.234202 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.234306 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wr67\" (UniqueName: \"kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.234841 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.234857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.256112 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wr67\" (UniqueName: \"kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67\") pod \"redhat-operators-9wrfg\" (UID: 
\"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.410013 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:32 crc kubenswrapper[4740]: I0216 13:48:32.234394 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:32 crc kubenswrapper[4740]: I0216 13:48:32.897316 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerID="5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734" exitCode=0 Feb 16 13:48:32 crc kubenswrapper[4740]: I0216 13:48:32.897535 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerDied","Data":"5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734"} Feb 16 13:48:32 crc kubenswrapper[4740]: I0216 13:48:32.897647 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerStarted","Data":"73ae9f14263612def2e0943ea4c2fc593503fdf00ace78a46c28b24e9e4e133b"} Feb 16 13:48:32 crc kubenswrapper[4740]: I0216 13:48:32.907186 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:32.916560 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:32.959947 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.423112 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtj6\" (UniqueName: \"kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.423490 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.423551 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.527443 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.527517 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.527631 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xtj6\" (UniqueName: \"kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.528757 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.529125 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.577548 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xtj6\" (UniqueName: \"kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.756131 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:34 crc kubenswrapper[4740]: I0216 13:48:34.433048 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:34 crc kubenswrapper[4740]: W0216 13:48:34.448626 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe664efb_cef2_414d_a946_72a7cc4afd4c.slice/crio-7b60619e007d8dc98b26c2f5889abe357fd7b89615d8c218a01b0c0235e9c0e3 WatchSource:0}: Error finding container 7b60619e007d8dc98b26c2f5889abe357fd7b89615d8c218a01b0c0235e9c0e3: Status 404 returned error can't find the container with id 7b60619e007d8dc98b26c2f5889abe357fd7b89615d8c218a01b0c0235e9c0e3 Feb 16 13:48:34 crc kubenswrapper[4740]: I0216 13:48:34.968181 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerStarted","Data":"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6"} Feb 16 13:48:34 crc kubenswrapper[4740]: I0216 13:48:34.973980 4740 generic.go:334] "Generic (PLEG): container finished" podID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerID="06ffda890106bfcf9095d45de62532e6e9f1bfc381b67688292f8899730cb6b9" exitCode=0 Feb 16 13:48:34 crc kubenswrapper[4740]: I0216 13:48:34.974038 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerDied","Data":"06ffda890106bfcf9095d45de62532e6e9f1bfc381b67688292f8899730cb6b9"} Feb 16 13:48:34 crc kubenswrapper[4740]: I0216 13:48:34.974071 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" 
event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerStarted","Data":"7b60619e007d8dc98b26c2f5889abe357fd7b89615d8c218a01b0c0235e9c0e3"} Feb 16 13:48:35 crc kubenswrapper[4740]: I0216 13:48:35.987105 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerID="75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6" exitCode=0 Feb 16 13:48:35 crc kubenswrapper[4740]: I0216 13:48:35.987446 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerDied","Data":"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6"} Feb 16 13:48:36 crc kubenswrapper[4740]: I0216 13:48:36.540863 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_16622824-15d7-4ff1-8eac-85fe5d8da9db/memcached/0.log" Feb 16 13:48:40 crc kubenswrapper[4740]: I0216 13:48:40.018574 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerStarted","Data":"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774"} Feb 16 13:48:40 crc kubenswrapper[4740]: I0216 13:48:40.020290 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerStarted","Data":"f697c3628ef44f92eb26076f11f5ddfc27c6237487725e570f999862ed89e7b0"} Feb 16 13:48:40 crc kubenswrapper[4740]: I0216 13:48:40.049525 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9wrfg" podStartSLOduration=2.5543570669999998 podStartE2EDuration="9.049506177s" podCreationTimestamp="2026-02-16 13:48:31 +0000 UTC" firstStartedPulling="2026-02-16 13:48:32.902828754 +0000 UTC m=+3340.279177485" lastFinishedPulling="2026-02-16 
13:48:39.397977874 +0000 UTC m=+3346.774326595" observedRunningTime="2026-02-16 13:48:40.049197847 +0000 UTC m=+3347.425546558" watchObservedRunningTime="2026-02-16 13:48:40.049506177 +0000 UTC m=+3347.425854898" Feb 16 13:48:41 crc kubenswrapper[4740]: I0216 13:48:41.030535 4740 generic.go:334] "Generic (PLEG): container finished" podID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerID="f697c3628ef44f92eb26076f11f5ddfc27c6237487725e570f999862ed89e7b0" exitCode=0 Feb 16 13:48:41 crc kubenswrapper[4740]: I0216 13:48:41.030654 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerDied","Data":"f697c3628ef44f92eb26076f11f5ddfc27c6237487725e570f999862ed89e7b0"} Feb 16 13:48:41 crc kubenswrapper[4740]: I0216 13:48:41.410824 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:41 crc kubenswrapper[4740]: I0216 13:48:41.410870 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:42 crc kubenswrapper[4740]: I0216 13:48:42.077069 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerStarted","Data":"2d89f440ef0d6a943923926768e072498d8fb84ee875dab02191bdd2ba08bde2"} Feb 16 13:48:42 crc kubenswrapper[4740]: I0216 13:48:42.113257 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8mx6k" podStartSLOduration=3.6422352890000003 podStartE2EDuration="10.113238127s" podCreationTimestamp="2026-02-16 13:48:32 +0000 UTC" firstStartedPulling="2026-02-16 13:48:34.976044339 +0000 UTC m=+3342.352393060" lastFinishedPulling="2026-02-16 13:48:41.447047187 +0000 UTC m=+3348.823395898" observedRunningTime="2026-02-16 
13:48:42.105161865 +0000 UTC m=+3349.481510586" watchObservedRunningTime="2026-02-16 13:48:42.113238127 +0000 UTC m=+3349.489586848" Feb 16 13:48:42 crc kubenswrapper[4740]: I0216 13:48:42.485171 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9wrfg" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="registry-server" probeResult="failure" output=< Feb 16 13:48:42 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 13:48:42 crc kubenswrapper[4740]: > Feb 16 13:48:43 crc kubenswrapper[4740]: I0216 13:48:43.756839 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:43 crc kubenswrapper[4740]: I0216 13:48:43.756908 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:44 crc kubenswrapper[4740]: I0216 13:48:44.809273 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8mx6k" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="registry-server" probeResult="failure" output=< Feb 16 13:48:44 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 13:48:44 crc kubenswrapper[4740]: > Feb 16 13:48:45 crc kubenswrapper[4740]: I0216 13:48:45.575071 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:48:45 crc kubenswrapper[4740]: I0216 13:48:45.575477 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:48:51 crc kubenswrapper[4740]: I0216 13:48:51.491655 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:51 crc kubenswrapper[4740]: I0216 13:48:51.535031 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:51 crc kubenswrapper[4740]: I0216 13:48:51.743449 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.167485 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9wrfg" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="registry-server" containerID="cri-o://da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774" gracePeriod=2 Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.680642 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.801444 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.841505 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities\") pod \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.841719 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content\") pod \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.841770 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wr67\" (UniqueName: \"kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67\") pod \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.842783 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities" (OuterVolumeSpecName: "utilities") pod "f8ef2ee5-6259-45b0-9e5e-b34778f39415" (UID: "f8ef2ee5-6259-45b0-9e5e-b34778f39415"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.853531 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.860445 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67" (OuterVolumeSpecName: "kube-api-access-9wr67") pod "f8ef2ee5-6259-45b0-9e5e-b34778f39415" (UID: "f8ef2ee5-6259-45b0-9e5e-b34778f39415"). InnerVolumeSpecName "kube-api-access-9wr67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.945634 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.945682 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wr67\" (UniqueName: \"kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.984389 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8ef2ee5-6259-45b0-9e5e-b34778f39415" (UID: "f8ef2ee5-6259-45b0-9e5e-b34778f39415"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.047441 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.176949 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerID="da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774" exitCode=0 Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.177020 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.177015 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerDied","Data":"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774"} Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.177078 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerDied","Data":"73ae9f14263612def2e0943ea4c2fc593503fdf00ace78a46c28b24e9e4e133b"} Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.177098 4740 scope.go:117] "RemoveContainer" containerID="da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.204336 4740 scope.go:117] "RemoveContainer" containerID="75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.216122 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 
13:48:54.226703 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.231977 4740 scope.go:117] "RemoveContainer" containerID="5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.287746 4740 scope.go:117] "RemoveContainer" containerID="da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774" Feb 16 13:48:54 crc kubenswrapper[4740]: E0216 13:48:54.288128 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774\": container with ID starting with da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774 not found: ID does not exist" containerID="da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.288159 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774"} err="failed to get container status \"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774\": rpc error: code = NotFound desc = could not find container \"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774\": container with ID starting with da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774 not found: ID does not exist" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.288179 4740 scope.go:117] "RemoveContainer" containerID="75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6" Feb 16 13:48:54 crc kubenswrapper[4740]: E0216 13:48:54.288580 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6\": container with ID 
starting with 75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6 not found: ID does not exist" containerID="75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.288613 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6"} err="failed to get container status \"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6\": rpc error: code = NotFound desc = could not find container \"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6\": container with ID starting with 75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6 not found: ID does not exist" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.288634 4740 scope.go:117] "RemoveContainer" containerID="5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734" Feb 16 13:48:54 crc kubenswrapper[4740]: E0216 13:48:54.288999 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734\": container with ID starting with 5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734 not found: ID does not exist" containerID="5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.289035 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734"} err="failed to get container status \"5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734\": rpc error: code = NotFound desc = could not find container \"5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734\": container with ID starting with 5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734 not found: 
ID does not exist" Feb 16 13:48:55 crc kubenswrapper[4740]: I0216 13:48:55.293322 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" path="/var/lib/kubelet/pods/f8ef2ee5-6259-45b0-9e5e-b34778f39415/volumes" Feb 16 13:48:55 crc kubenswrapper[4740]: I0216 13:48:55.944796 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:55 crc kubenswrapper[4740]: I0216 13:48:55.947146 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8mx6k" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="registry-server" containerID="cri-o://2d89f440ef0d6a943923926768e072498d8fb84ee875dab02191bdd2ba08bde2" gracePeriod=2 Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.199762 4740 generic.go:334] "Generic (PLEG): container finished" podID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerID="2d89f440ef0d6a943923926768e072498d8fb84ee875dab02191bdd2ba08bde2" exitCode=0 Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.199851 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerDied","Data":"2d89f440ef0d6a943923926768e072498d8fb84ee875dab02191bdd2ba08bde2"} Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.427652 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.502486 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xtj6\" (UniqueName: \"kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6\") pod \"be664efb-cef2-414d-a946-72a7cc4afd4c\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.502696 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities\") pod \"be664efb-cef2-414d-a946-72a7cc4afd4c\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.502744 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content\") pod \"be664efb-cef2-414d-a946-72a7cc4afd4c\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.525436 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities" (OuterVolumeSpecName: "utilities") pod "be664efb-cef2-414d-a946-72a7cc4afd4c" (UID: "be664efb-cef2-414d-a946-72a7cc4afd4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.538203 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6" (OuterVolumeSpecName: "kube-api-access-5xtj6") pod "be664efb-cef2-414d-a946-72a7cc4afd4c" (UID: "be664efb-cef2-414d-a946-72a7cc4afd4c"). InnerVolumeSpecName "kube-api-access-5xtj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.605057 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xtj6\" (UniqueName: \"kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.605101 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.606321 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be664efb-cef2-414d-a946-72a7cc4afd4c" (UID: "be664efb-cef2-414d-a946-72a7cc4afd4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.706334 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.211972 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerDied","Data":"7b60619e007d8dc98b26c2f5889abe357fd7b89615d8c218a01b0c0235e9c0e3"} Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.212305 4740 scope.go:117] "RemoveContainer" containerID="2d89f440ef0d6a943923926768e072498d8fb84ee875dab02191bdd2ba08bde2" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.212473 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.236964 4740 scope.go:117] "RemoveContainer" containerID="f697c3628ef44f92eb26076f11f5ddfc27c6237487725e570f999862ed89e7b0" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.262773 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.265541 4740 scope.go:117] "RemoveContainer" containerID="06ffda890106bfcf9095d45de62532e6e9f1bfc381b67688292f8899730cb6b9" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.271845 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.294988 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" path="/var/lib/kubelet/pods/be664efb-cef2-414d-a946-72a7cc4afd4c/volumes" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.936745 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.101464 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.148320 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.228173 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.336213 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.353910 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/extract/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.380453 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.919998 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-9kqqk_069bdc0e-d9e1-4e93-a6fc-8aa439550dd0/manager/0.log" Feb 16 13:48:59 crc kubenswrapper[4740]: I0216 13:48:59.272196 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-9xbzr_90321508-9bb9-458e-ada0-001c779161c1/manager/0.log" Feb 16 13:48:59 crc kubenswrapper[4740]: I0216 13:48:59.365659 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-kk4mh_7f22cc6e-3761-4336-ab1d-74d9fd88432c/manager/0.log" Feb 16 13:48:59 crc kubenswrapper[4740]: I0216 13:48:59.624708 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-nl26x_fdf72675-c282-4f45-ad93-19aa643dcff8/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 
13:49:00.019010 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-rpbmb_f0032304-8799-4a85-964f-2017bfd2dbc8/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 13:49:00.261453 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-v28lz_3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 13:49:00.276431 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-s8wc5_4eba30c7-3dab-4b8f-8a22-2dae642a6ac5/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 13:49:00.509704 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-z2m7j_fce48c02-3aa2-404b-a9a4-7ba789835be0/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 13:49:00.560758 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-44wdn_7f932811-4449-440a-b4c7-4817bfb33dd3/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 13:49:00.794382 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-7gw4t_a49c1d67-8cf7-4429-ac73-da13d129304d/manager/0.log" Feb 16 13:49:01 crc kubenswrapper[4740]: I0216 13:49:01.295595 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-7t92r_121ee83b-e7f1-4302-9455-4cc6f53a07a5/manager/0.log" Feb 16 13:49:01 crc kubenswrapper[4740]: I0216 13:49:01.444535 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-fn4g2_ba6767b2-e03c-4c12-880d-90bd809d9b48/manager/0.log" Feb 16 13:49:01 crc 
kubenswrapper[4740]: I0216 13:49:01.734415 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7_76134787-0eff-47bd-982e-16c2c4f98f19/manager/0.log" Feb 16 13:49:02 crc kubenswrapper[4740]: I0216 13:49:02.327259 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f746469c7-kzds7_4c82699a-266c-43ce-acce-32c8aea26c10/operator/0.log" Feb 16 13:49:02 crc kubenswrapper[4740]: I0216 13:49:02.527147 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qzt4t_7fe65e33-ae2e-4f40-b686-454192d6b538/registry-server/0.log" Feb 16 13:49:02 crc kubenswrapper[4740]: I0216 13:49:02.821632 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-gclp4_6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.053501 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-64xmt_c6400043-1325-4af3-8c79-4b383441668c/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.153859 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-pbpdw_00e4da3c-6d3d-459a-86c2-01a4cdb81e51/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.262775 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qttct_3e6434b1-64ba-481f-b001-8a465254dc0a/operator/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.364867 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6865b_519c5b9e-ed4f-4cba-a731-70a22209f642/manager/0.log" Feb 16 
13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.659016 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-cnxhk_04f86073-3515-4d62-a02a-c63d06ecdaaa/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.816631 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-58cw4_7666c640-a9f4-4e09-b79c-7fd31116bd79/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.911491 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-pbkbj_001719d5-3a51-4f6b-b316-9e98f53ed575/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.941346 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd688d8fc-7shgl_e749615e-a716-4e6e-8830-947b128e4e58/manager/0.log" Feb 16 13:49:05 crc kubenswrapper[4740]: I0216 13:49:05.297279 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-jsfjx_d6090007-0c13-4ea2-823c-3d95bb336fd8/manager/0.log" Feb 16 13:49:15 crc kubenswrapper[4740]: I0216 13:49:15.575368 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:49:15 crc kubenswrapper[4740]: I0216 13:49:15.575999 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 16 13:49:25 crc kubenswrapper[4740]: I0216 13:49:25.297164 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-m9krp_2eef055f-7504-4f20-817e-afcd1bb6f996/control-plane-machine-set-operator/0.log" Feb 16 13:49:25 crc kubenswrapper[4740]: I0216 13:49:25.477049 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jcv2d_643bf47c-570f-4204-adb1-512cd9e914b8/machine-api-operator/0.log" Feb 16 13:49:25 crc kubenswrapper[4740]: I0216 13:49:25.511786 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jcv2d_643bf47c-570f-4204-adb1-512cd9e914b8/kube-rbac-proxy/0.log" Feb 16 13:49:36 crc kubenswrapper[4740]: I0216 13:49:36.977078 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-kflg5_8b35e0e1-44f6-4481-a71e-98e3f8462bb7/cert-manager-controller/0.log" Feb 16 13:49:37 crc kubenswrapper[4740]: I0216 13:49:37.108216 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-hpjbh_beeada69-65c5-434a-af02-8e6b23e13138/cert-manager-cainjector/0.log" Feb 16 13:49:37 crc kubenswrapper[4740]: I0216 13:49:37.166429 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-25fnr_a68020b3-17ff-43dc-b17d-0845940c0758/cert-manager-webhook/0.log" Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.574905 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.575536 4740 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.575588 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.576391 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.576471 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c" gracePeriod=600 Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.793617 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c" exitCode=0 Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.793690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c"} Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.794139 4740 scope.go:117] "RemoveContainer" 
containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:49:46 crc kubenswrapper[4740]: I0216 13:49:46.805691 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9"} Feb 16 13:49:48 crc kubenswrapper[4740]: I0216 13:49:48.563466 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-nrnvc_edcdba40-6318-4d29-a235-829e94bc8089/nmstate-console-plugin/0.log" Feb 16 13:49:48 crc kubenswrapper[4740]: I0216 13:49:48.733885 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-v88gn_3c0ee084-492b-46da-82b3-9c9a8e1715fd/nmstate-handler/0.log" Feb 16 13:49:48 crc kubenswrapper[4740]: I0216 13:49:48.801006 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-g5mkh_58a2ae40-4e01-43af-907b-7e91246277ea/kube-rbac-proxy/0.log" Feb 16 13:49:48 crc kubenswrapper[4740]: I0216 13:49:48.852009 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-g5mkh_58a2ae40-4e01-43af-907b-7e91246277ea/nmstate-metrics/0.log" Feb 16 13:49:49 crc kubenswrapper[4740]: I0216 13:49:49.001504 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-76m6k_afdcb81a-db2a-4c04-b73b-30facf2d10af/nmstate-operator/0.log" Feb 16 13:49:49 crc kubenswrapper[4740]: I0216 13:49:49.077202 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-r9sw6_b7ffd056-af44-4007-8de6-cc707902d4c4/nmstate-webhook/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.492180 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-69bbfbf88f-kfv4h_e9790ca2-5f44-4c39-a31f-13dc607ab7c4/kube-rbac-proxy/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.659374 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.671950 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-kfv4h_e9790ca2-5f44-4c39-a31f-13dc607ab7c4/controller/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.859155 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.859389 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.859787 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.936082 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.089214 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.090046 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.099636 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.113607 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.274528 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.282655 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.293022 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.305166 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/controller/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.432856 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/frr-metrics/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.436459 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/kube-rbac-proxy/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.564312 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/kube-rbac-proxy-frr/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.630571 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/reloader/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.853171 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-spwnh_2e220608-2271-4260-bc94-e4d206c718d4/frr-k8s-webhook-server/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.909222 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75b694c59-wkpkw_97f25eec-68aa-4b48-b40a-08ce0599d525/manager/0.log" Feb 16 13:50:15 crc kubenswrapper[4740]: I0216 13:50:15.113865 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7887f4bfcc-9grrx_4163a038-60ca-4e8e-bf45-028b04101fc9/webhook-server/0.log" Feb 16 13:50:15 crc kubenswrapper[4740]: I0216 13:50:15.314504 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ffcm2_05937f4c-8149-4db8-bb5e-e863ae011d92/kube-rbac-proxy/0.log" Feb 16 13:50:15 crc kubenswrapper[4740]: I0216 13:50:15.746751 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ffcm2_05937f4c-8149-4db8-bb5e-e863ae011d92/speaker/0.log" Feb 16 13:50:15 crc kubenswrapper[4740]: I0216 13:50:15.825163 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/frr/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.245109 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.485204 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" 
Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.498027 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.519383 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.697708 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.721254 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.740958 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/extract/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.861806 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.064086 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.073721 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.099173 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.287942 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.298023 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.500180 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.779203 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/registry-server/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.799166 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.808520 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.824820 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.026571 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.032169 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.237586 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.474568 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/registry-server/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.483260 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.485026 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.499972 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.671071 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.679066 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.696678 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/extract/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.865528 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xsssg_db2dd193-ab4e-4011-988a-d516f2da367e/marketplace-operator/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.876128 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.063109 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.064578 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.087920 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.233845 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.250354 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.368096 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/registry-server/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.428032 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.616347 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.653525 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.653891 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.865671 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.916638 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 
16 13:50:32 crc kubenswrapper[4740]: I0216 13:50:32.257210 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/registry-server/0.log" Feb 16 13:51:45 crc kubenswrapper[4740]: I0216 13:51:45.575861 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:51:45 crc kubenswrapper[4740]: I0216 13:51:45.576416 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:52:15 crc kubenswrapper[4740]: I0216 13:52:15.574957 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:52:15 crc kubenswrapper[4740]: I0216 13:52:15.577280 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:52:17 crc kubenswrapper[4740]: I0216 13:52:17.110805 4740 generic.go:334] "Generic (PLEG): container finished" podID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerID="934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80" exitCode=0 Feb 16 13:52:17 crc 
kubenswrapper[4740]: I0216 13:52:17.110864 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/must-gather-5m4h9" event={"ID":"f7facfd3-bee7-437b-9628-e135acc0d16a","Type":"ContainerDied","Data":"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80"} Feb 16 13:52:17 crc kubenswrapper[4740]: I0216 13:52:17.113188 4740 scope.go:117] "RemoveContainer" containerID="934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80" Feb 16 13:52:17 crc kubenswrapper[4740]: I0216 13:52:17.553090 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8952r_must-gather-5m4h9_f7facfd3-bee7-437b-9628-e135acc0d16a/gather/0.log" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.850425 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2bf8z"] Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851528 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="extract-utilities" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851543 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="extract-utilities" Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851566 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851575 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851590 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="extract-content" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851599 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="extract-content" Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851620 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="extract-content" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851627 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="extract-content" Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851645 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="extract-utilities" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851653 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="extract-utilities" Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851665 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851673 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851911 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851932 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.853677 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.863973 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2bf8z"] Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.964331 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.964515 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.964562 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndbf\" (UniqueName: \"kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.065921 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.066237 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qndbf\" (UniqueName: \"kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.066325 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.066418 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.066733 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.094426 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qndbf\" (UniqueName: \"kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.172637 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.265435 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8952r/must-gather-5m4h9"] Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.265894 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8952r/must-gather-5m4h9" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="copy" containerID="cri-o://ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f" gracePeriod=2 Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.274160 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8952r/must-gather-5m4h9"] Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.792697 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2bf8z"] Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.841621 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8952r_must-gather-5m4h9_f7facfd3-bee7-437b-9628-e135acc0d16a/copy/0.log" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.842107 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.998891 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5w97\" (UniqueName: \"kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97\") pod \"f7facfd3-bee7-437b-9628-e135acc0d16a\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:26.999966 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output\") pod \"f7facfd3-bee7-437b-9628-e135acc0d16a\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.004536 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97" (OuterVolumeSpecName: "kube-api-access-n5w97") pod "f7facfd3-bee7-437b-9628-e135acc0d16a" (UID: "f7facfd3-bee7-437b-9628-e135acc0d16a"). InnerVolumeSpecName "kube-api-access-n5w97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.101909 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5w97\" (UniqueName: \"kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.144729 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f7facfd3-bee7-437b-9628-e135acc0d16a" (UID: "f7facfd3-bee7-437b-9628-e135acc0d16a"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.201860 4740 generic.go:334] "Generic (PLEG): container finished" podID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerID="ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454" exitCode=0 Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.201948 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerDied","Data":"ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454"} Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.202012 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerStarted","Data":"bacf2e1b1d4c403ec4e0b591dbf8f6a87d501268512564c30a6cd79804fa2c73"} Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.203327 4740 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.205466 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.205586 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8952r_must-gather-5m4h9_f7facfd3-bee7-437b-9628-e135acc0d16a/copy/0.log" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.206313 4740 generic.go:334] "Generic (PLEG): container finished" podID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerID="ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f" exitCode=143 Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.206377 4740 scope.go:117] "RemoveContainer" 
containerID="ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.206428 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.298007 4740 scope.go:117] "RemoveContainer" containerID="934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.304738 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" path="/var/lib/kubelet/pods/f7facfd3-bee7-437b-9628-e135acc0d16a/volumes" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.465997 4740 scope.go:117] "RemoveContainer" containerID="ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f" Feb 16 13:52:27 crc kubenswrapper[4740]: E0216 13:52:27.466429 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f\": container with ID starting with ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f not found: ID does not exist" containerID="ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.466493 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f"} err="failed to get container status \"ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f\": rpc error: code = NotFound desc = could not find container \"ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f\": container with ID starting with ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f not found: ID does not exist" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 
13:52:27.466519 4740 scope.go:117] "RemoveContainer" containerID="934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80" Feb 16 13:52:27 crc kubenswrapper[4740]: E0216 13:52:27.466877 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80\": container with ID starting with 934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80 not found: ID does not exist" containerID="934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.466928 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80"} err="failed to get container status \"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80\": rpc error: code = NotFound desc = could not find container \"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80\": container with ID starting with 934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80 not found: ID does not exist" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.659482 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"] Feb 16 13:52:27 crc kubenswrapper[4740]: E0216 13:52:27.660026 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="copy" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.660056 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="copy" Feb 16 13:52:27 crc kubenswrapper[4740]: E0216 13:52:27.660088 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="gather" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.660101 4740 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="gather" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.660361 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="gather" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.660398 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="copy" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.662678 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.676972 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"] Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.817340 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.817445 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.817507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggpbx\" (UniqueName: \"kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx\") pod 
\"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.919168 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.919303 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.919383 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggpbx\" (UniqueName: \"kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.919636 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.919898 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities\") pod \"redhat-marketplace-mhfmz\" (UID: 
\"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.943131 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggpbx\" (UniqueName: \"kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.995298 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:28 crc kubenswrapper[4740]: I0216 13:52:28.225261 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerStarted","Data":"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1"} Feb 16 13:52:28 crc kubenswrapper[4740]: I0216 13:52:28.531978 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"] Feb 16 13:52:29 crc kubenswrapper[4740]: I0216 13:52:29.239094 4740 generic.go:334] "Generic (PLEG): container finished" podID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerID="e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5" exitCode=0 Feb 16 13:52:29 crc kubenswrapper[4740]: I0216 13:52:29.239159 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerDied","Data":"e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5"} Feb 16 13:52:29 crc kubenswrapper[4740]: I0216 13:52:29.239555 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" 
event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerStarted","Data":"bf712a5dae40b5da6a8e2079a8e234db26f56cc4f02bd1cd509ede7dd909ba9a"} Feb 16 13:52:30 crc kubenswrapper[4740]: I0216 13:52:30.249829 4740 generic.go:334] "Generic (PLEG): container finished" podID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerID="536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1" exitCode=0 Feb 16 13:52:30 crc kubenswrapper[4740]: I0216 13:52:30.249857 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerDied","Data":"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1"} Feb 16 13:52:31 crc kubenswrapper[4740]: I0216 13:52:31.260457 4740 generic.go:334] "Generic (PLEG): container finished" podID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerID="9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0" exitCode=0 Feb 16 13:52:31 crc kubenswrapper[4740]: I0216 13:52:31.260509 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerDied","Data":"9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0"} Feb 16 13:52:31 crc kubenswrapper[4740]: I0216 13:52:31.263443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerStarted","Data":"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e"} Feb 16 13:52:31 crc kubenswrapper[4740]: I0216 13:52:31.299882 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2bf8z" podStartSLOduration=2.836956883 podStartE2EDuration="6.299863799s" podCreationTimestamp="2026-02-16 13:52:25 +0000 UTC" firstStartedPulling="2026-02-16 13:52:27.204750268 +0000 UTC 
m=+3574.581098989" lastFinishedPulling="2026-02-16 13:52:30.667657184 +0000 UTC m=+3578.044005905" observedRunningTime="2026-02-16 13:52:31.294053498 +0000 UTC m=+3578.670402219" watchObservedRunningTime="2026-02-16 13:52:31.299863799 +0000 UTC m=+3578.676212510" Feb 16 13:52:32 crc kubenswrapper[4740]: I0216 13:52:32.284154 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerStarted","Data":"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed"} Feb 16 13:52:32 crc kubenswrapper[4740]: I0216 13:52:32.307732 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mhfmz" podStartSLOduration=2.867420351 podStartE2EDuration="5.307711317s" podCreationTimestamp="2026-02-16 13:52:27 +0000 UTC" firstStartedPulling="2026-02-16 13:52:29.241953245 +0000 UTC m=+3576.618301966" lastFinishedPulling="2026-02-16 13:52:31.682244211 +0000 UTC m=+3579.058592932" observedRunningTime="2026-02-16 13:52:32.300966296 +0000 UTC m=+3579.677315027" watchObservedRunningTime="2026-02-16 13:52:32.307711317 +0000 UTC m=+3579.684060038" Feb 16 13:52:36 crc kubenswrapper[4740]: I0216 13:52:36.172856 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:36 crc kubenswrapper[4740]: I0216 13:52:36.173404 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:36 crc kubenswrapper[4740]: I0216 13:52:36.227417 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:36 crc kubenswrapper[4740]: I0216 13:52:36.413773 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:37 
crc kubenswrapper[4740]: I0216 13:52:37.996028 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:37 crc kubenswrapper[4740]: I0216 13:52:37.996411 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:38 crc kubenswrapper[4740]: I0216 13:52:38.040563 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:38 crc kubenswrapper[4740]: I0216 13:52:38.410476 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:40 crc kubenswrapper[4740]: I0216 13:52:40.843927 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2bf8z"] Feb 16 13:52:40 crc kubenswrapper[4740]: I0216 13:52:40.844308 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2bf8z" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="registry-server" containerID="cri-o://2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e" gracePeriod=2 Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.290718 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.380361 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content\") pod \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.380578 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qndbf\" (UniqueName: \"kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf\") pod \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.380630 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities\") pod \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.382611 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities" (OuterVolumeSpecName: "utilities") pod "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" (UID: "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.388244 4740 generic.go:334] "Generic (PLEG): container finished" podID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerID="2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e" exitCode=0 Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.388302 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerDied","Data":"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e"} Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.388353 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerDied","Data":"bacf2e1b1d4c403ec4e0b591dbf8f6a87d501268512564c30a6cd79804fa2c73"} Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.388384 4740 scope.go:117] "RemoveContainer" containerID="2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.388388 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.389241 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf" (OuterVolumeSpecName: "kube-api-access-qndbf") pod "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" (UID: "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba"). InnerVolumeSpecName "kube-api-access-qndbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.442350 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" (UID: "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.445258 4740 scope.go:117] "RemoveContainer" containerID="536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.468250 4740 scope.go:117] "RemoveContainer" containerID="ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.482823 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qndbf\" (UniqueName: \"kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.482858 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.482868 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.526465 4740 scope.go:117] "RemoveContainer" containerID="2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e" Feb 16 13:52:41 crc kubenswrapper[4740]: E0216 13:52:41.527236 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e\": container with ID starting with 2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e not found: ID does not exist" containerID="2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.527271 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e"} err="failed to get container status \"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e\": rpc error: code = NotFound desc = could not find container \"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e\": container with ID starting with 2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e not found: ID does not exist" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.527299 4740 scope.go:117] "RemoveContainer" containerID="536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1" Feb 16 13:52:41 crc kubenswrapper[4740]: E0216 13:52:41.527592 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1\": container with ID starting with 536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1 not found: ID does not exist" containerID="536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.527622 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1"} err="failed to get container status \"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1\": rpc error: code = NotFound desc = could not find container 
\"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1\": container with ID starting with 536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1 not found: ID does not exist" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.527638 4740 scope.go:117] "RemoveContainer" containerID="ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454" Feb 16 13:52:41 crc kubenswrapper[4740]: E0216 13:52:41.527876 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454\": container with ID starting with ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454 not found: ID does not exist" containerID="ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.527900 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454"} err="failed to get container status \"ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454\": rpc error: code = NotFound desc = could not find container \"ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454\": container with ID starting with ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454 not found: ID does not exist" Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.733321 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2bf8z"] Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.741637 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2bf8z"] Feb 16 13:52:42 crc kubenswrapper[4740]: I0216 13:52:42.456019 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"] Feb 16 13:52:42 crc kubenswrapper[4740]: I0216 
13:52:42.456267 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mhfmz" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="registry-server" containerID="cri-o://48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed" gracePeriod=2 Feb 16 13:52:42 crc kubenswrapper[4740]: E0216 13:52:42.661335 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6c2b3e3_9a6e_4895_841d_f8be511fec31.slice/crio-48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed.scope\": RecentStats: unable to find data in memory cache]" Feb 16 13:52:42 crc kubenswrapper[4740]: I0216 13:52:42.923049 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.013709 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities\") pod \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.013777 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggpbx\" (UniqueName: \"kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx\") pod \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.013889 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content\") pod \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") 
" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.014708 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities" (OuterVolumeSpecName: "utilities") pod "f6c2b3e3-9a6e-4895-841d-f8be511fec31" (UID: "f6c2b3e3-9a6e-4895-841d-f8be511fec31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.021284 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx" (OuterVolumeSpecName: "kube-api-access-ggpbx") pod "f6c2b3e3-9a6e-4895-841d-f8be511fec31" (UID: "f6c2b3e3-9a6e-4895-841d-f8be511fec31"). InnerVolumeSpecName "kube-api-access-ggpbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.041516 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6c2b3e3-9a6e-4895-841d-f8be511fec31" (UID: "f6c2b3e3-9a6e-4895-841d-f8be511fec31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.116033 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.116069 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.116086 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggpbx\" (UniqueName: \"kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.294381 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" path="/var/lib/kubelet/pods/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba/volumes" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.420106 4740 generic.go:334] "Generic (PLEG): container finished" podID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerID="48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed" exitCode=0 Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.420159 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhfmz" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.420164 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerDied","Data":"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed"} Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.420200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerDied","Data":"bf712a5dae40b5da6a8e2079a8e234db26f56cc4f02bd1cd509ede7dd909ba9a"} Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.420226 4740 scope.go:117] "RemoveContainer" containerID="48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.446696 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"] Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.464660 4740 scope.go:117] "RemoveContainer" containerID="9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.464899 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"] Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.487452 4740 scope.go:117] "RemoveContainer" containerID="e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.531116 4740 scope.go:117] "RemoveContainer" containerID="48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed" Feb 16 13:52:43 crc kubenswrapper[4740]: E0216 13:52:43.531786 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed\": container with ID starting with 48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed not found: ID does not exist" containerID="48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.531845 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed"} err="failed to get container status \"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed\": rpc error: code = NotFound desc = could not find container \"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed\": container with ID starting with 48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed not found: ID does not exist" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.531871 4740 scope.go:117] "RemoveContainer" containerID="9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0" Feb 16 13:52:43 crc kubenswrapper[4740]: E0216 13:52:43.532367 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0\": container with ID starting with 9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0 not found: ID does not exist" containerID="9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.532398 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0"} err="failed to get container status \"9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0\": rpc error: code = NotFound desc = could not find container \"9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0\": container with ID 
starting with 9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0 not found: ID does not exist" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.532416 4740 scope.go:117] "RemoveContainer" containerID="e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5" Feb 16 13:52:43 crc kubenswrapper[4740]: E0216 13:52:43.532824 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5\": container with ID starting with e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5 not found: ID does not exist" containerID="e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5" Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.532851 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5"} err="failed to get container status \"e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5\": rpc error: code = NotFound desc = could not find container \"e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5\": container with ID starting with e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5 not found: ID does not exist" Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 13:52:45.297290 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" path="/var/lib/kubelet/pods/f6c2b3e3-9a6e-4895-841d-f8be511fec31/volumes" Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 13:52:45.575416 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 
13:52:45.575477 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 13:52:45.575514 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 13:52:45.576222 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 13:52:45.576277 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" gracePeriod=600 Feb 16 13:52:45 crc kubenswrapper[4740]: E0216 13:52:45.722102 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:52:46 crc kubenswrapper[4740]: I0216 13:52:46.451505 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" exitCode=0 Feb 16 13:52:46 crc kubenswrapper[4740]: I0216 13:52:46.451577 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9"} Feb 16 13:52:46 crc kubenswrapper[4740]: I0216 13:52:46.451656 4740 scope.go:117] "RemoveContainer" containerID="5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c" Feb 16 13:52:46 crc kubenswrapper[4740]: I0216 13:52:46.452275 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:52:46 crc kubenswrapper[4740]: E0216 13:52:46.452623 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:52:58 crc kubenswrapper[4740]: I0216 13:52:58.281948 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:52:58 crc kubenswrapper[4740]: E0216 13:52:58.283085 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 
13:53:12 crc kubenswrapper[4740]: I0216 13:53:12.281965 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:53:12 crc kubenswrapper[4740]: E0216 13:53:12.283239 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:53:27 crc kubenswrapper[4740]: I0216 13:53:27.283082 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:53:27 crc kubenswrapper[4740]: E0216 13:53:27.284441 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:53:41 crc kubenswrapper[4740]: I0216 13:53:41.281887 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:53:41 crc kubenswrapper[4740]: E0216 13:53:41.282975 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" 
podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:53:56 crc kubenswrapper[4740]: I0216 13:53:56.280698 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:53:56 crc kubenswrapper[4740]: E0216 13:53:56.281565 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:54:06 crc kubenswrapper[4740]: I0216 13:54:06.684010 4740 scope.go:117] "RemoveContainer" containerID="63f6e3806b9f59c7b120289d4987091581c3ebc8ebf7e0c0bf27263c9e0eeb9f" Feb 16 13:54:11 crc kubenswrapper[4740]: I0216 13:54:11.281713 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:54:11 crc kubenswrapper[4740]: E0216 13:54:11.282722 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:54:24 crc kubenswrapper[4740]: I0216 13:54:24.281802 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:54:24 crc kubenswrapper[4740]: E0216 13:54:24.283039 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:54:37 crc kubenswrapper[4740]: I0216 13:54:37.282499 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:54:37 crc kubenswrapper[4740]: E0216 13:54:37.283646 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:54:51 crc kubenswrapper[4740]: I0216 13:54:51.281850 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:54:51 crc kubenswrapper[4740]: E0216 13:54:51.282763 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:55:05 crc kubenswrapper[4740]: I0216 13:55:05.282924 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:55:05 crc kubenswrapper[4740]: E0216 13:55:05.284480 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:55:17 crc kubenswrapper[4740]: I0216 13:55:17.281983 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:55:17 crc kubenswrapper[4740]: E0216 13:55:17.284913 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758026 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2mt6s/must-gather-88jwg"] Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.758907 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="extract-content" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758919 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="extract-content" Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.758936 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758943 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.758951 4740 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758958 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.758970 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="extract-content" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758977 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="extract-content" Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.758991 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="extract-utilities" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758997 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="extract-utilities" Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.759031 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="extract-utilities" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.759038 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="extract-utilities" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.759270 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.759292 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.760270 4740 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.762520 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2mt6s"/"openshift-service-ca.crt" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.762645 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2mt6s"/"default-dockercfg-jlmx2" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.762800 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2mt6s"/"kube-root-ca.crt" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.773107 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output\") pod \"must-gather-88jwg\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.773158 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhgqt\" (UniqueName: \"kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt\") pod \"must-gather-88jwg\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.782057 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2mt6s/must-gather-88jwg"] Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.875126 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output\") pod \"must-gather-88jwg\" (UID: 
\"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.875416 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhgqt\" (UniqueName: \"kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt\") pod \"must-gather-88jwg\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.875642 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output\") pod \"must-gather-88jwg\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.904123 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhgqt\" (UniqueName: \"kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt\") pod \"must-gather-88jwg\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:21 crc kubenswrapper[4740]: I0216 13:55:21.093353 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:21 crc kubenswrapper[4740]: I0216 13:55:21.570228 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2mt6s/must-gather-88jwg"] Feb 16 13:55:21 crc kubenswrapper[4740]: I0216 13:55:21.916318 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/must-gather-88jwg" event={"ID":"edeaee36-29fa-4f01-91d3-e79e65f07117","Type":"ContainerStarted","Data":"6de4ab5b79f3cd9ee7fef7919182ed1cd692decc589a11ce5806dc9ce6541d23"} Feb 16 13:55:21 crc kubenswrapper[4740]: I0216 13:55:21.916667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/must-gather-88jwg" event={"ID":"edeaee36-29fa-4f01-91d3-e79e65f07117","Type":"ContainerStarted","Data":"bce4b8bf3a7474d0370ba27f79c31d25416fb471e68cabe14bceb14676468ad5"} Feb 16 13:55:22 crc kubenswrapper[4740]: I0216 13:55:22.926559 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/must-gather-88jwg" event={"ID":"edeaee36-29fa-4f01-91d3-e79e65f07117","Type":"ContainerStarted","Data":"df5c35636e4b389439eb3bb1245f8b4f007c35530262c25d096fddd9822bcd74"} Feb 16 13:55:22 crc kubenswrapper[4740]: I0216 13:55:22.948278 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2mt6s/must-gather-88jwg" podStartSLOduration=2.948257947 podStartE2EDuration="2.948257947s" podCreationTimestamp="2026-02-16 13:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:22.943675004 +0000 UTC m=+3750.320023735" watchObservedRunningTime="2026-02-16 13:55:22.948257947 +0000 UTC m=+3750.324606668" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.267766 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-pnpgz"] Feb 16 13:55:25 crc kubenswrapper[4740]: 
I0216 13:55:25.270046 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.353347 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vjrf\" (UniqueName: \"kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.353969 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.455862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.455999 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vjrf\" (UniqueName: \"kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.456363 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") 
" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.478566 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vjrf\" (UniqueName: \"kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.600084 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: W0216 13:55:25.625995 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76887d5b_6380_4648_940a_bb025db77fc9.slice/crio-75ebbbed34a98b51ae9041e6280ffcb089ea433f15e2db608eb900ba126bb243 WatchSource:0}: Error finding container 75ebbbed34a98b51ae9041e6280ffcb089ea433f15e2db608eb900ba126bb243: Status 404 returned error can't find the container with id 75ebbbed34a98b51ae9041e6280ffcb089ea433f15e2db608eb900ba126bb243 Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.952959 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" event={"ID":"76887d5b-6380-4648-940a-bb025db77fc9","Type":"ContainerStarted","Data":"5967744a58f90558a044d1988dda143828539df3212e40a650ea46f63383d4a1"} Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.953546 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" event={"ID":"76887d5b-6380-4648-940a-bb025db77fc9","Type":"ContainerStarted","Data":"75ebbbed34a98b51ae9041e6280ffcb089ea433f15e2db608eb900ba126bb243"} Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.984496 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" 
podStartSLOduration=0.98447209 podStartE2EDuration="984.47209ms" podCreationTimestamp="2026-02-16 13:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:25.974449608 +0000 UTC m=+3753.350798329" watchObservedRunningTime="2026-02-16 13:55:25.98447209 +0000 UTC m=+3753.360820811" Feb 16 13:55:29 crc kubenswrapper[4740]: I0216 13:55:29.282289 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:55:29 crc kubenswrapper[4740]: E0216 13:55:29.284214 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:55:40 crc kubenswrapper[4740]: I0216 13:55:40.281277 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:55:40 crc kubenswrapper[4740]: E0216 13:55:40.282160 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:55:52 crc kubenswrapper[4740]: I0216 13:55:52.281375 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:55:52 crc kubenswrapper[4740]: E0216 13:55:52.282068 4740 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:56:00 crc kubenswrapper[4740]: I0216 13:56:00.248301 4740 generic.go:334] "Generic (PLEG): container finished" podID="76887d5b-6380-4648-940a-bb025db77fc9" containerID="5967744a58f90558a044d1988dda143828539df3212e40a650ea46f63383d4a1" exitCode=0 Feb 16 13:56:00 crc kubenswrapper[4740]: I0216 13:56:00.248400 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" event={"ID":"76887d5b-6380-4648-940a-bb025db77fc9","Type":"ContainerDied","Data":"5967744a58f90558a044d1988dda143828539df3212e40a650ea46f63383d4a1"} Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.374483 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.414506 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-pnpgz"] Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.431803 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-pnpgz"] Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.519534 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host\") pod \"76887d5b-6380-4648-940a-bb025db77fc9\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.519646 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vjrf\" (UniqueName: \"kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf\") pod \"76887d5b-6380-4648-940a-bb025db77fc9\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.519635 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host" (OuterVolumeSpecName: "host") pod "76887d5b-6380-4648-940a-bb025db77fc9" (UID: "76887d5b-6380-4648-940a-bb025db77fc9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.520202 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.531479 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf" (OuterVolumeSpecName: "kube-api-access-4vjrf") pod "76887d5b-6380-4648-940a-bb025db77fc9" (UID: "76887d5b-6380-4648-940a-bb025db77fc9"). InnerVolumeSpecName "kube-api-access-4vjrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.626656 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vjrf\" (UniqueName: \"kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.291348 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ebbbed34a98b51ae9041e6280ffcb089ea433f15e2db608eb900ba126bb243" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.291754 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.621657 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-tm9fr"] Feb 16 13:56:02 crc kubenswrapper[4740]: E0216 13:56:02.622073 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76887d5b-6380-4648-940a-bb025db77fc9" containerName="container-00" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.622086 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="76887d5b-6380-4648-940a-bb025db77fc9" containerName="container-00" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.622257 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="76887d5b-6380-4648-940a-bb025db77fc9" containerName="container-00" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.622875 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.744842 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.744938 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdflq\" (UniqueName: \"kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.846606 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.846705 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdflq\" (UniqueName: \"kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.846826 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.871583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdflq\" (UniqueName: \"kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.939273 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.293503 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76887d5b-6380-4648-940a-bb025db77fc9" path="/var/lib/kubelet/pods/76887d5b-6380-4648-940a-bb025db77fc9/volumes" Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.299570 4740 generic.go:334] "Generic (PLEG): container finished" podID="f1f88001-ae96-4762-b792-df1ad7dc11fa" containerID="1db78d26ae490a4f59c7495143b0b08733218e0ef503db5fffd97f4525e48294" exitCode=0 Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.299607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" event={"ID":"f1f88001-ae96-4762-b792-df1ad7dc11fa","Type":"ContainerDied","Data":"1db78d26ae490a4f59c7495143b0b08733218e0ef503db5fffd97f4525e48294"} Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.299665 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" event={"ID":"f1f88001-ae96-4762-b792-df1ad7dc11fa","Type":"ContainerStarted","Data":"d9bafc4d0d2ea09f67830ecf3aaef9220530d0aab48f79ff2349255ec9748d6a"} Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.685033 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-tm9fr"] Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.691743 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-tm9fr"] Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.497694 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.580780 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host\") pod \"f1f88001-ae96-4762-b792-df1ad7dc11fa\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.580993 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdflq\" (UniqueName: \"kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq\") pod \"f1f88001-ae96-4762-b792-df1ad7dc11fa\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.581011 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host" (OuterVolumeSpecName: "host") pod "f1f88001-ae96-4762-b792-df1ad7dc11fa" (UID: "f1f88001-ae96-4762-b792-df1ad7dc11fa"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.581439 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.585951 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq" (OuterVolumeSpecName: "kube-api-access-mdflq") pod "f1f88001-ae96-4762-b792-df1ad7dc11fa" (UID: "f1f88001-ae96-4762-b792-df1ad7dc11fa"). InnerVolumeSpecName "kube-api-access-mdflq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.683455 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdflq\" (UniqueName: \"kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.878536 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-dcksj"] Feb 16 13:56:04 crc kubenswrapper[4740]: E0216 13:56:04.879399 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f88001-ae96-4762-b792-df1ad7dc11fa" containerName="container-00" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.879416 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f88001-ae96-4762-b792-df1ad7dc11fa" containerName="container-00" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.879677 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f88001-ae96-4762-b792-df1ad7dc11fa" containerName="container-00" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.880289 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.987994 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.988085 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6fkz\" (UniqueName: \"kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.090179 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.090263 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6fkz\" (UniqueName: \"kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.090335 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc 
kubenswrapper[4740]: I0216 13:56:05.107358 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6fkz\" (UniqueName: \"kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.205027 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc kubenswrapper[4740]: W0216 13:56:05.230202 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fda0424_a309_4a06_a210_ff39f5a0eb25.slice/crio-9cd48cc937325c9e97ddeae47f0df85957bd8937b0f908f11527da4ebd4d17dc WatchSource:0}: Error finding container 9cd48cc937325c9e97ddeae47f0df85957bd8937b0f908f11527da4ebd4d17dc: Status 404 returned error can't find the container with id 9cd48cc937325c9e97ddeae47f0df85957bd8937b0f908f11527da4ebd4d17dc Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.281795 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:56:05 crc kubenswrapper[4740]: E0216 13:56:05.282213 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.292687 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f88001-ae96-4762-b792-df1ad7dc11fa" 
path="/var/lib/kubelet/pods/f1f88001-ae96-4762-b792-df1ad7dc11fa/volumes" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.324416 4740 scope.go:117] "RemoveContainer" containerID="1db78d26ae490a4f59c7495143b0b08733218e0ef503db5fffd97f4525e48294" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.324694 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.332665 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" event={"ID":"6fda0424-a309-4a06-a210-ff39f5a0eb25","Type":"ContainerStarted","Data":"9cd48cc937325c9e97ddeae47f0df85957bd8937b0f908f11527da4ebd4d17dc"} Feb 16 13:56:06 crc kubenswrapper[4740]: I0216 13:56:06.343694 4740 generic.go:334] "Generic (PLEG): container finished" podID="6fda0424-a309-4a06-a210-ff39f5a0eb25" containerID="38ee60ec1837492becce668ff60101aa5f5ff66a9e497025052b346cbcbc6b57" exitCode=0 Feb 16 13:56:06 crc kubenswrapper[4740]: I0216 13:56:06.344082 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" event={"ID":"6fda0424-a309-4a06-a210-ff39f5a0eb25","Type":"ContainerDied","Data":"38ee60ec1837492becce668ff60101aa5f5ff66a9e497025052b346cbcbc6b57"} Feb 16 13:56:06 crc kubenswrapper[4740]: I0216 13:56:06.396543 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-dcksj"] Feb 16 13:56:06 crc kubenswrapper[4740]: I0216 13:56:06.406232 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-dcksj"] Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.447703 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.529932 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6fkz\" (UniqueName: \"kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz\") pod \"6fda0424-a309-4a06-a210-ff39f5a0eb25\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.530182 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host\") pod \"6fda0424-a309-4a06-a210-ff39f5a0eb25\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.530307 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host" (OuterVolumeSpecName: "host") pod "6fda0424-a309-4a06-a210-ff39f5a0eb25" (UID: "6fda0424-a309-4a06-a210-ff39f5a0eb25"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.530667 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.536520 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz" (OuterVolumeSpecName: "kube-api-access-q6fkz") pod "6fda0424-a309-4a06-a210-ff39f5a0eb25" (UID: "6fda0424-a309-4a06-a210-ff39f5a0eb25"). InnerVolumeSpecName "kube-api-access-q6fkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.632950 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6fkz\" (UniqueName: \"kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:08 crc kubenswrapper[4740]: I0216 13:56:08.361009 4740 scope.go:117] "RemoveContainer" containerID="38ee60ec1837492becce668ff60101aa5f5ff66a9e497025052b346cbcbc6b57" Feb 16 13:56:08 crc kubenswrapper[4740]: I0216 13:56:08.361101 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:09 crc kubenswrapper[4740]: I0216 13:56:09.300639 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fda0424-a309-4a06-a210-ff39f5a0eb25" path="/var/lib/kubelet/pods/6fda0424-a309-4a06-a210-ff39f5a0eb25/volumes" Feb 16 13:56:19 crc kubenswrapper[4740]: I0216 13:56:19.296861 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:56:19 crc kubenswrapper[4740]: E0216 13:56:19.298545 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:56:32 crc kubenswrapper[4740]: I0216 13:56:32.282914 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:56:32 crc kubenswrapper[4740]: E0216 13:56:32.284084 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:56:33 crc kubenswrapper[4740]: I0216 13:56:33.775751 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fbb5f795d-phd88_793c4693-2327-492b-9798-18501804cdf3/barbican-api/0.log" Feb 16 13:56:33 crc kubenswrapper[4740]: I0216 13:56:33.969566 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fbb5f795d-phd88_793c4693-2327-492b-9798-18501804cdf3/barbican-api-log/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.005475 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-758fd9dd8b-46z5m_30b251e5-1979-41ad-ad86-efebb5e6a240/barbican-keystone-listener/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.089575 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-758fd9dd8b-46z5m_30b251e5-1979-41ad-ad86-efebb5e6a240/barbican-keystone-listener-log/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.173093 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f4698b555-qswqc_c3550143-6df6-42d0-b18a-8b6275eac907/barbican-worker/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.230656 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f4698b555-qswqc_c3550143-6df6-42d0-b18a-8b6275eac907/barbican-worker-log/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.384161 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8_8e96214f-a46e-451a-97d9-d448c66826f4/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.480583 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/ceilometer-central-agent/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.523059 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/ceilometer-notification-agent/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.595603 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/sg-core/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.607464 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/proxy-httpd/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.768982 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcc53865-a327-4f02-a908-f0b97ae1e2c2/cinder-api/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.800710 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcc53865-a327-4f02-a908-f0b97ae1e2c2/cinder-api-log/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.019505 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8cc77810-2df3-4a51-8429-326b706d2388/cinder-scheduler/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.026331 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8cc77810-2df3-4a51-8429-326b706d2388/probe/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.064390 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg_3691fefa-c161-4670-bae7-ddde074e2892/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.209914 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw_928b9f1f-3a42-47e3-b895-756f66452ebf/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.271209 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/init/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.437250 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/init/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.478356 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g_fe15334d-14c1-4670-89fe-3b7d4864b782/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.492170 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/dnsmasq-dns/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.686572 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8535644-0ebc-4cc6-bbc5-a5ef02f30685/glance-httpd/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.710773 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8535644-0ebc-4cc6-bbc5-a5ef02f30685/glance-log/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.856913 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_1da7f67c-ce66-4f6b-b760-f2ae017599c0/glance-log/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.868179 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1da7f67c-ce66-4f6b-b760-f2ae017599c0/glance-httpd/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.081122 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-56b9fd8c4d-crftf_add1eb0e-dbfc-463a-b676-3e2e2b1f478d/horizon/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.213222 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh_3e117ddc-9ff8-414d-859b-0a16b4846029/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.360802 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-42525_bf3c8754-68ef-4956-a95b-c6751d81b5bf/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.447865 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-56b9fd8c4d-crftf_add1eb0e-dbfc-463a-b676-3e2e2b1f478d/horizon-log/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.682708 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5cc7d69b6f-dmv77_e68475b5-404f-48fc-a05a-ea18135e837c/keystone-api/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.685276 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_05c7ea6d-5a24-4b21-851c-e7d51fa61a38/kube-state-metrics/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.814347 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fjh65_2ab3e576-ab98-496c-a189-2e79796f9e98/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:37 crc kubenswrapper[4740]: I0216 13:56:37.161461 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d8c67b945-9qhdf_2d2e1871-02f7-4ff9-9987-054bf39f4418/neutron-api/0.log" Feb 16 13:56:37 crc kubenswrapper[4740]: I0216 13:56:37.202488 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d8c67b945-9qhdf_2d2e1871-02f7-4ff9-9987-054bf39f4418/neutron-httpd/0.log" Feb 16 13:56:37 crc kubenswrapper[4740]: I0216 13:56:37.383158 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w_3a7cecfd-1168-4187-a70c-7b2151ff214f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:37 crc kubenswrapper[4740]: I0216 13:56:37.948190 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_56ee2c81-2a61-476c-9731-b94363864633/nova-api-log/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.036093 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_07256285-a907-4822-80dc-b5f5866d437f/nova-cell0-conductor-conductor/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.150231 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_56ee2c81-2a61-476c-9731-b94363864633/nova-api-api/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.302556 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4465f42a-9c2a-4aa7-9e45-fa28f78cddd7/nova-cell1-conductor-conductor/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.383887 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_94da2ded-002e-4aa6-9828-404bee84c146/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.414111 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-lhwdj_58706e85-268c-4ce0-b1e4-82dd86872568/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.661201 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_722ecd51-0827-457b-8d5c-246a1a57e24a/nova-metadata-log/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.949720 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/mysql-bootstrap/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.001634 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e3ba9a19-9826-4c43-9907-8cd8f1a4272a/nova-scheduler-scheduler/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.102794 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/mysql-bootstrap/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.168374 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/galera/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.314971 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/mysql-bootstrap/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.538208 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/mysql-bootstrap/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.540840 4740 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/galera/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.701928 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4f78f448-6577-48d1-b077-01e42c14758c/openstackclient/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.749704 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b4j4m_ad1b2300-a42b-4a99-b186-7661bb410a36/openstack-network-exporter/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.869524 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_722ecd51-0827-457b-8d5c-246a1a57e24a/nova-metadata-metadata/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.970276 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server-init/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.142332 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server-init/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.170560 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovs-vswitchd/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.217136 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.363638 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qnt79_04335a5d-7cac-4a47-982c-70cae9db69ff/ovn-controller/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.516412 4740 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-zzdbk_d66e0695-3544-4fd0-9d34-42bea96ea9de/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.600305 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d4f80435-6b1f-45e1-bc0c-ff150bd3b33b/openstack-network-exporter/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.702293 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d4f80435-6b1f-45e1-bc0c-ff150bd3b33b/ovn-northd/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.830183 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0ba53212-5a6f-45cb-9547-cccd4b36aa32/openstack-network-exporter/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.878775 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0ba53212-5a6f-45cb-9547-cccd4b36aa32/ovsdbserver-nb/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.070584 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_daca8d6b-05ed-4888-9833-9076a4256166/openstack-network-exporter/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.082526 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_daca8d6b-05ed-4888-9833-9076a4256166/ovsdbserver-sb/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.282978 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-758758df44-4g6db_983c874c-3b25-49df-82cb-b3dfaf1db7ac/placement-api/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.316909 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/setup-container/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.414951 
4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-758758df44-4g6db_983c874c-3b25-49df-82cb-b3dfaf1db7ac/placement-log/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.594885 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/rabbitmq/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.599578 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/setup-container/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.670965 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/setup-container/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.824076 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/setup-container/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.866548 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g_9fa622a2-4774-4038-b9ec-ec4bc7f57a46/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.912020 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/rabbitmq/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.098873 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4c988_2abfe09c-2736-49b3-b4e5-fb0e30deb510/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.124258 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m_1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.325383 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-87s8t_8c5c2438-cfba-41a9-b429-80c9ce563348/ssh-known-hosts-edpm-deployment/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.341645 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-r8mds_981b1e60-57d5-4a6b-8531-3fd31dd46fa5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.634720 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d4b8b747f-tcdvw_fae3001c-021f-4f48-860e-0893978fafaa/proxy-server/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.720985 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d4b8b747f-tcdvw_fae3001c-021f-4f48-860e-0893978fafaa/proxy-httpd/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.752996 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4rgvg_8a769496-58ca-4540-9dc4-bd8df7e682fc/swift-ring-rebalance/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.867405 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-auditor/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.927240 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-reaper/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.002332 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-replicator/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.174973 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-auditor/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.242519 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-server/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.319543 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-server/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.387888 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-replicator/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.401625 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-updater/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.451700 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-auditor/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.537726 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-expirer/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.627487 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-replicator/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.649056 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-server/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.661352 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-updater/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.741087 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/rsync/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.813718 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/swift-recon-cron/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.904082 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-99lsn_590a1858-7b00-48c8-a2b4-dae7b652ed89/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:44 crc kubenswrapper[4740]: I0216 13:56:44.087988 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_90aac50c-27a6-4ebd-b207-d3bc439dc1fe/tempest-tests-tempest-tests-runner/0.log" Feb 16 13:56:44 crc kubenswrapper[4740]: I0216 13:56:44.147853 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_4a270185-f419-49b5-aa81-b6d254269d2d/test-operator-logs-container/0.log" Feb 16 13:56:44 crc kubenswrapper[4740]: I0216 13:56:44.281801 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:56:44 crc kubenswrapper[4740]: E0216 13:56:44.282079 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:56:44 crc kubenswrapper[4740]: I0216 13:56:44.289707 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-w42sv_5add9653-c644-42d7-bd4d-10ecb8f84a90/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:54 crc kubenswrapper[4740]: I0216 13:56:54.513226 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_16622824-15d7-4ff1-8eac-85fe5d8da9db/memcached/0.log" Feb 16 13:56:56 crc kubenswrapper[4740]: I0216 13:56:56.281188 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:56:56 crc kubenswrapper[4740]: E0216 13:56:56.281914 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:57:07 crc kubenswrapper[4740]: I0216 13:57:07.927877 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.087385 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.112699 4740 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.118987 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.341456 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.343119 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/extract/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.344638 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.857750 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-9kqqk_069bdc0e-d9e1-4e93-a6fc-8aa439550dd0/manager/0.log" Feb 16 13:57:09 crc kubenswrapper[4740]: I0216 13:57:09.379855 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-9xbzr_90321508-9bb9-458e-ada0-001c779161c1/manager/0.log" Feb 16 13:57:09 crc kubenswrapper[4740]: I0216 13:57:09.428670 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-kk4mh_7f22cc6e-3761-4336-ab1d-74d9fd88432c/manager/0.log" Feb 16 13:57:09 crc 
kubenswrapper[4740]: I0216 13:57:09.638851 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-nl26x_fdf72675-c282-4f45-ad93-19aa643dcff8/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.012477 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-rpbmb_f0032304-8799-4a85-964f-2017bfd2dbc8/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.163085 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-v28lz_3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.257705 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-s8wc5_4eba30c7-3dab-4b8f-8a22-2dae642a6ac5/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.442783 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-z2m7j_fce48c02-3aa2-404b-a9a4-7ba789835be0/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.514008 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-44wdn_7f932811-4449-440a-b4c7-4817bfb33dd3/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.797140 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-7gw4t_a49c1d67-8cf7-4429-ac73-da13d129304d/manager/0.log" Feb 16 13:57:11 crc kubenswrapper[4740]: I0216 13:57:11.006336 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-7t92r_121ee83b-e7f1-4302-9455-4cc6f53a07a5/manager/0.log" Feb 16 
13:57:11 crc kubenswrapper[4740]: I0216 13:57:11.209644 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-fn4g2_ba6767b2-e03c-4c12-880d-90bd809d9b48/manager/0.log" Feb 16 13:57:11 crc kubenswrapper[4740]: I0216 13:57:11.280774 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:57:11 crc kubenswrapper[4740]: E0216 13:57:11.281055 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:57:11 crc kubenswrapper[4740]: I0216 13:57:11.539419 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7_76134787-0eff-47bd-982e-16c2c4f98f19/manager/0.log" Feb 16 13:57:12 crc kubenswrapper[4740]: I0216 13:57:12.002098 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f746469c7-kzds7_4c82699a-266c-43ce-acce-32c8aea26c10/operator/0.log" Feb 16 13:57:12 crc kubenswrapper[4740]: I0216 13:57:12.414042 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qzt4t_7fe65e33-ae2e-4f40-b686-454192d6b538/registry-server/0.log" Feb 16 13:57:12 crc kubenswrapper[4740]: I0216 13:57:12.673433 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-gclp4_6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4/manager/0.log" Feb 16 13:57:12 crc kubenswrapper[4740]: I0216 13:57:12.717750 4740 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-pbpdw_00e4da3c-6d3d-459a-86c2-01a4cdb81e51/manager/0.log" Feb 16 13:57:12 crc kubenswrapper[4740]: I0216 13:57:12.863936 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-64xmt_c6400043-1325-4af3-8c79-4b383441668c/manager/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.017019 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qttct_3e6434b1-64ba-481f-b001-8a465254dc0a/operator/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.278586 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6865b_519c5b9e-ed4f-4cba-a731-70a22209f642/manager/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.388331 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-cnxhk_04f86073-3515-4d62-a02a-c63d06ecdaaa/manager/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.518112 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-58cw4_7666c640-a9f4-4e09-b79c-7fd31116bd79/manager/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.655590 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-pbkbj_001719d5-3a51-4f6b-b316-9e98f53ed575/manager/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.829629 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd688d8fc-7shgl_e749615e-a716-4e6e-8830-947b128e4e58/manager/0.log" Feb 16 13:57:15 crc kubenswrapper[4740]: I0216 13:57:15.419184 
4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-jsfjx_d6090007-0c13-4ea2-823c-3d95bb336fd8/manager/0.log" Feb 16 13:57:25 crc kubenswrapper[4740]: I0216 13:57:25.281640 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:57:25 crc kubenswrapper[4740]: E0216 13:57:25.283744 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:57:34 crc kubenswrapper[4740]: I0216 13:57:34.127471 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-m9krp_2eef055f-7504-4f20-817e-afcd1bb6f996/control-plane-machine-set-operator/0.log" Feb 16 13:57:34 crc kubenswrapper[4740]: I0216 13:57:34.760482 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jcv2d_643bf47c-570f-4204-adb1-512cd9e914b8/kube-rbac-proxy/0.log" Feb 16 13:57:34 crc kubenswrapper[4740]: I0216 13:57:34.926717 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jcv2d_643bf47c-570f-4204-adb1-512cd9e914b8/machine-api-operator/0.log" Feb 16 13:57:40 crc kubenswrapper[4740]: I0216 13:57:40.280791 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:57:40 crc kubenswrapper[4740]: E0216 13:57:40.281633 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:57:46 crc kubenswrapper[4740]: I0216 13:57:46.437379 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-kflg5_8b35e0e1-44f6-4481-a71e-98e3f8462bb7/cert-manager-controller/0.log" Feb 16 13:57:46 crc kubenswrapper[4740]: I0216 13:57:46.774046 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-25fnr_a68020b3-17ff-43dc-b17d-0845940c0758/cert-manager-webhook/0.log" Feb 16 13:57:46 crc kubenswrapper[4740]: I0216 13:57:46.787523 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-hpjbh_beeada69-65c5-434a-af02-8e6b23e13138/cert-manager-cainjector/0.log" Feb 16 13:57:55 crc kubenswrapper[4740]: I0216 13:57:55.281312 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:57:55 crc kubenswrapper[4740]: I0216 13:57:55.565169 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"2869e78774c1f00fe4f31559cf3edaa4b9383697775554e132318f0021981a4c"} Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.413096 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-nrnvc_edcdba40-6318-4d29-a235-829e94bc8089/nmstate-console-plugin/0.log" Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.585124 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-v88gn_3c0ee084-492b-46da-82b3-9c9a8e1715fd/nmstate-handler/0.log" Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.633483 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-g5mkh_58a2ae40-4e01-43af-907b-7e91246277ea/kube-rbac-proxy/0.log" Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.692844 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-g5mkh_58a2ae40-4e01-43af-907b-7e91246277ea/nmstate-metrics/0.log" Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.853918 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-76m6k_afdcb81a-db2a-4c04-b73b-30facf2d10af/nmstate-operator/0.log" Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.880976 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-r9sw6_b7ffd056-af44-4007-8de6-cc707902d4c4/nmstate-webhook/0.log" Feb 16 13:58:29 crc kubenswrapper[4740]: I0216 13:58:29.986966 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-kfv4h_e9790ca2-5f44-4c39-a31f-13dc607ab7c4/kube-rbac-proxy/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.111964 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-kfv4h_e9790ca2-5f44-4c39-a31f-13dc607ab7c4/controller/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.142867 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.435092 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 
13:58:30.447164 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.471563 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.490944 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.690602 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.719938 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.742327 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.742570 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.899304 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.905565 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.911403 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.167730 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/controller/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.414521 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/kube-rbac-proxy/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.455015 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/frr-metrics/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.503343 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/kube-rbac-proxy-frr/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.655939 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/reloader/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.680802 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-spwnh_2e220608-2271-4260-bc94-e4d206c718d4/frr-k8s-webhook-server/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.903178 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75b694c59-wkpkw_97f25eec-68aa-4b48-b40a-08ce0599d525/manager/0.log" Feb 16 13:58:32 crc kubenswrapper[4740]: I0216 13:58:32.053009 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7887f4bfcc-9grrx_4163a038-60ca-4e8e-bf45-028b04101fc9/webhook-server/0.log" Feb 16 13:58:32 crc kubenswrapper[4740]: I0216 13:58:32.207286 4740 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ffcm2_05937f4c-8149-4db8-bb5e-e863ae011d92/kube-rbac-proxy/0.log" Feb 16 13:58:32 crc kubenswrapper[4740]: I0216 13:58:32.789390 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ffcm2_05937f4c-8149-4db8-bb5e-e863ae011d92/speaker/0.log" Feb 16 13:58:32 crc kubenswrapper[4740]: I0216 13:58:32.876104 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/frr/0.log" Feb 16 13:58:42 crc kubenswrapper[4740]: I0216 13:58:42.930460 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:42 crc kubenswrapper[4740]: E0216 13:58:42.931558 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fda0424-a309-4a06-a210-ff39f5a0eb25" containerName="container-00" Feb 16 13:58:42 crc kubenswrapper[4740]: I0216 13:58:42.931575 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fda0424-a309-4a06-a210-ff39f5a0eb25" containerName="container-00" Feb 16 13:58:42 crc kubenswrapper[4740]: I0216 13:58:42.931793 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fda0424-a309-4a06-a210-ff39f5a0eb25" containerName="container-00" Feb 16 13:58:42 crc kubenswrapper[4740]: I0216 13:58:42.933502 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:42 crc kubenswrapper[4740]: I0216 13:58:42.947109 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.025424 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.025545 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.025589 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56r84\" (UniqueName: \"kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.127267 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.127385 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.127445 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56r84\" (UniqueName: \"kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.128213 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.128260 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.150371 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56r84\" (UniqueName: \"kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.309746 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.818859 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:44 crc kubenswrapper[4740]: I0216 13:58:44.018649 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerStarted","Data":"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574"} Feb 16 13:58:44 crc kubenswrapper[4740]: I0216 13:58:44.019047 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerStarted","Data":"a65a57223c4501a1de109d7a544a32c06da41978c8bb203cc209c339ca8971f8"} Feb 16 13:58:45 crc kubenswrapper[4740]: I0216 13:58:45.027643 4740 generic.go:334] "Generic (PLEG): container finished" podID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerID="ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574" exitCode=0 Feb 16 13:58:45 crc kubenswrapper[4740]: I0216 13:58:45.027737 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerDied","Data":"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574"} Feb 16 13:58:45 crc kubenswrapper[4740]: I0216 13:58:45.029710 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.038057 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerStarted","Data":"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e"} Feb 16 13:58:46 crc 
kubenswrapper[4740]: I0216 13:58:46.168616 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.455750 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.494911 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.499462 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.679854 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.686834 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.728190 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/extract/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.844576 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.045670 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-content/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.048938 4740 generic.go:334] "Generic (PLEG): container finished" podID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerID="bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e" exitCode=0 Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.049007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerDied","Data":"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e"} Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.068887 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-content/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.082103 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.199002 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.228787 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-content/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.359346 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.565433 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.598528 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.616684 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.799638 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.819386 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.065547 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerStarted","Data":"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460"} Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.080323 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.087675 4740 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/certified-operators-6gjpb" podStartSLOduration=3.6568634639999997 podStartE2EDuration="6.087656993s" podCreationTimestamp="2026-02-16 13:58:42 +0000 UTC" firstStartedPulling="2026-02-16 13:58:45.029508855 +0000 UTC m=+3952.405857576" lastFinishedPulling="2026-02-16 13:58:47.460302384 +0000 UTC m=+3954.836651105" observedRunningTime="2026-02-16 13:58:48.086859229 +0000 UTC m=+3955.463207950" watchObservedRunningTime="2026-02-16 13:58:48.087656993 +0000 UTC m=+3955.464005704" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.293093 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/registry-server/0.log" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.861039 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.869566 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.895998 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.157880 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.227689 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.393647 4740 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.601472 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/registry-server/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.632579 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.710115 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.763490 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.801706 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.863986 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.904610 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/extract/0.log" Feb 16 13:58:50 crc kubenswrapper[4740]: I0216 13:58:50.026297 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xsssg_db2dd193-ab4e-4011-988a-d516f2da367e/marketplace-operator/0.log" Feb 16 13:58:50 crc kubenswrapper[4740]: I0216 13:58:50.722007 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.045460 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.110148 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.122660 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.362183 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.433523 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.478842 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.482978 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/registry-server/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.611715 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.660858 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.660903 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.806806 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.819495 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 16 13:58:52 crc kubenswrapper[4740]: I0216 13:58:52.182372 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/registry-server/0.log" Feb 16 13:58:53 crc kubenswrapper[4740]: I0216 13:58:53.310000 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:53 crc kubenswrapper[4740]: I0216 
13:58:53.310340 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:53 crc kubenswrapper[4740]: I0216 13:58:53.369071 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:54 crc kubenswrapper[4740]: I0216 13:58:54.154732 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:54 crc kubenswrapper[4740]: I0216 13:58:54.223837 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.129621 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6gjpb" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="registry-server" containerID="cri-o://bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460" gracePeriod=2 Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.849678 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.954561 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content\") pod \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.954949 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56r84\" (UniqueName: \"kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84\") pod \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.955128 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities\") pod \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.955591 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities" (OuterVolumeSpecName: "utilities") pod "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" (UID: "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.955895 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.961097 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84" (OuterVolumeSpecName: "kube-api-access-56r84") pod "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" (UID: "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282"). InnerVolumeSpecName "kube-api-access-56r84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.013551 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" (UID: "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.058258 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.058293 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56r84\" (UniqueName: \"kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.138382 4740 generic.go:334] "Generic (PLEG): container finished" podID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerID="bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460" exitCode=0 Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.138431 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerDied","Data":"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460"} Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.138439 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.138463 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerDied","Data":"a65a57223c4501a1de109d7a544a32c06da41978c8bb203cc209c339ca8971f8"} Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.138489 4740 scope.go:117] "RemoveContainer" containerID="bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.158829 4740 scope.go:117] "RemoveContainer" containerID="bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.175382 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.183122 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.183763 4740 scope.go:117] "RemoveContainer" containerID="ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.225100 4740 scope.go:117] "RemoveContainer" containerID="bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460" Feb 16 13:58:57 crc kubenswrapper[4740]: E0216 13:58:57.225497 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460\": container with ID starting with bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460 not found: ID does not exist" containerID="bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.225534 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460"} err="failed to get container status \"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460\": rpc error: code = NotFound desc = could not find container \"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460\": container with ID starting with bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460 not found: ID does not exist" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.225560 4740 scope.go:117] "RemoveContainer" containerID="bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e" Feb 16 13:58:57 crc kubenswrapper[4740]: E0216 13:58:57.225978 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e\": container with ID starting with bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e not found: ID does not exist" containerID="bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.226013 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e"} err="failed to get container status \"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e\": rpc error: code = NotFound desc = could not find container \"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e\": container with ID starting with bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e not found: ID does not exist" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.226038 4740 scope.go:117] "RemoveContainer" containerID="ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574" Feb 16 13:58:57 crc kubenswrapper[4740]: E0216 
13:58:57.226232 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574\": container with ID starting with ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574 not found: ID does not exist" containerID="ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.226249 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574"} err="failed to get container status \"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574\": rpc error: code = NotFound desc = could not find container \"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574\": container with ID starting with ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574 not found: ID does not exist" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.303476 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" path="/var/lib/kubelet/pods/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/volumes" Feb 16 13:59:26 crc kubenswrapper[4740]: E0216 13:59:26.387272 4740 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:44404->38.102.83.147:36137: write tcp 38.102.83.147:44404->38.102.83.147:36137: write: broken pipe Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.355638 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 13:59:56 crc kubenswrapper[4740]: E0216 13:59:56.360906 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="extract-utilities" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.370962 4740 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="extract-utilities" Feb 16 13:59:56 crc kubenswrapper[4740]: E0216 13:59:56.371090 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="registry-server" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.371111 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="registry-server" Feb 16 13:59:56 crc kubenswrapper[4740]: E0216 13:59:56.371147 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="extract-content" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.371157 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="extract-content" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.371893 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="registry-server" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.373692 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.373875 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.469319 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.469681 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws46f\" (UniqueName: \"kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.470169 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.572011 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.572101 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities\") pod \"redhat-operators-hxxtb\" (UID: 
\"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.572136 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws46f\" (UniqueName: \"kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.572840 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.572869 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.598784 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws46f\" (UniqueName: \"kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.702574 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:57 crc kubenswrapper[4740]: I0216 13:59:57.187503 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 13:59:57 crc kubenswrapper[4740]: I0216 13:59:57.699184 4740 generic.go:334] "Generic (PLEG): container finished" podID="919ebe38-6d23-4da9-a367-69340e2f8574" containerID="9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b" exitCode=0 Feb 16 13:59:57 crc kubenswrapper[4740]: I0216 13:59:57.699276 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerDied","Data":"9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b"} Feb 16 13:59:57 crc kubenswrapper[4740]: I0216 13:59:57.699496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerStarted","Data":"f872b2600a32c5dc9ffd8d3280c5731606294c2c3a7a9e4f4aa51a1b5cfc5bf7"} Feb 16 13:59:58 crc kubenswrapper[4740]: I0216 13:59:58.711932 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerStarted","Data":"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053"} Feb 16 13:59:59 crc kubenswrapper[4740]: I0216 13:59:59.727134 4740 generic.go:334] "Generic (PLEG): container finished" podID="919ebe38-6d23-4da9-a367-69340e2f8574" containerID="3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053" exitCode=0 Feb 16 13:59:59 crc kubenswrapper[4740]: I0216 13:59:59.727216 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" 
event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerDied","Data":"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053"} Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.208106 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms"] Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.211360 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.214384 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.217437 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.230336 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms"] Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.288453 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.288536 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mltns\" (UniqueName: \"kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.288598 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.390784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mltns\" (UniqueName: \"kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.390879 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.390996 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.392079 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.400223 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.411508 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mltns\" (UniqueName: \"kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.542695 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.743795 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerStarted","Data":"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507"} Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.772952 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hxxtb" podStartSLOduration=2.245311904 podStartE2EDuration="4.772934407s" podCreationTimestamp="2026-02-16 13:59:56 +0000 UTC" firstStartedPulling="2026-02-16 13:59:57.701007949 +0000 UTC m=+4025.077356670" lastFinishedPulling="2026-02-16 14:00:00.228630452 +0000 UTC m=+4027.604979173" observedRunningTime="2026-02-16 14:00:00.771635627 +0000 UTC m=+4028.147984358" watchObservedRunningTime="2026-02-16 14:00:00.772934407 +0000 UTC m=+4028.149283128" Feb 16 14:00:01 crc kubenswrapper[4740]: I0216 14:00:01.026134 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms"] Feb 16 14:00:01 crc kubenswrapper[4740]: W0216 14:00:01.263259 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb02f2d5a_f1ed_477d_b41d_7e1b56eb3b81.slice/crio-36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac WatchSource:0}: Error finding container 36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac: Status 404 returned error can't find the container with id 36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac Feb 16 14:00:01 crc kubenswrapper[4740]: I0216 14:00:01.761476 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" 
event={"ID":"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81","Type":"ContainerStarted","Data":"0520dd5b7bcf57cec6d803dfb30fa63d746d3bcde271cc9a1ba8c2c6bf06aba8"} Feb 16 14:00:01 crc kubenswrapper[4740]: I0216 14:00:01.761940 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" event={"ID":"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81","Type":"ContainerStarted","Data":"36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac"} Feb 16 14:00:01 crc kubenswrapper[4740]: I0216 14:00:01.786405 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" podStartSLOduration=1.786385691 podStartE2EDuration="1.786385691s" podCreationTimestamp="2026-02-16 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:00:01.780340923 +0000 UTC m=+4029.156689654" watchObservedRunningTime="2026-02-16 14:00:01.786385691 +0000 UTC m=+4029.162734412" Feb 16 14:00:02 crc kubenswrapper[4740]: I0216 14:00:02.773565 4740 generic.go:334] "Generic (PLEG): container finished" podID="b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" containerID="0520dd5b7bcf57cec6d803dfb30fa63d746d3bcde271cc9a1ba8c2c6bf06aba8" exitCode=0 Feb 16 14:00:02 crc kubenswrapper[4740]: I0216 14:00:02.773641 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" event={"ID":"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81","Type":"ContainerDied","Data":"0520dd5b7bcf57cec6d803dfb30fa63d746d3bcde271cc9a1ba8c2c6bf06aba8"} Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.211062 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.388757 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mltns\" (UniqueName: \"kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns\") pod \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.389120 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume\") pod \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.389219 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume\") pod \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.391604 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume" (OuterVolumeSpecName: "config-volume") pod "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" (UID: "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.397516 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" (UID: "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.401452 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns" (OuterVolumeSpecName: "kube-api-access-mltns") pod "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" (UID: "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81"). InnerVolumeSpecName "kube-api-access-mltns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.492185 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.492537 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mltns\" (UniqueName: \"kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.492550 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.799943 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" event={"ID":"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81","Type":"ContainerDied","Data":"36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac"} Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.800001 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.800078 4740 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.864846 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"] Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.873383 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"] Feb 16 14:00:05 crc kubenswrapper[4740]: I0216 14:00:05.318410 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab47f99f-f805-4d2e-bdf6-6da944e511a5" path="/var/lib/kubelet/pods/ab47f99f-f805-4d2e-bdf6-6da944e511a5/volumes" Feb 16 14:00:06 crc kubenswrapper[4740]: I0216 14:00:06.703666 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:06 crc kubenswrapper[4740]: I0216 14:00:06.704522 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:06 crc kubenswrapper[4740]: I0216 14:00:06.853410 4740 scope.go:117] "RemoveContainer" containerID="abe9c24d5f732811d552e04df67f2330c658e4db7a4f4498f3fb4c1af1df86df" Feb 16 14:00:07 crc kubenswrapper[4740]: I0216 14:00:07.775892 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxxtb" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="registry-server" probeResult="failure" output=< Feb 16 14:00:07 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 14:00:07 crc kubenswrapper[4740]: > Feb 16 14:00:15 crc kubenswrapper[4740]: I0216 14:00:15.575676 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:00:15 crc kubenswrapper[4740]: I0216 14:00:15.578164 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:00:16 crc kubenswrapper[4740]: I0216 14:00:16.762896 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:16 crc kubenswrapper[4740]: I0216 14:00:16.820070 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:17 crc kubenswrapper[4740]: I0216 14:00:17.003395 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 14:00:17 crc kubenswrapper[4740]: I0216 14:00:17.940171 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hxxtb" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="registry-server" containerID="cri-o://813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507" gracePeriod=2 Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.471027 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.519182 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities\") pod \"919ebe38-6d23-4da9-a367-69340e2f8574\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.519315 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws46f\" (UniqueName: \"kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f\") pod \"919ebe38-6d23-4da9-a367-69340e2f8574\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.519519 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content\") pod \"919ebe38-6d23-4da9-a367-69340e2f8574\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.520167 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities" (OuterVolumeSpecName: "utilities") pod "919ebe38-6d23-4da9-a367-69340e2f8574" (UID: "919ebe38-6d23-4da9-a367-69340e2f8574"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.526083 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f" (OuterVolumeSpecName: "kube-api-access-ws46f") pod "919ebe38-6d23-4da9-a367-69340e2f8574" (UID: "919ebe38-6d23-4da9-a367-69340e2f8574"). InnerVolumeSpecName "kube-api-access-ws46f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.621606 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.621649 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws46f\" (UniqueName: \"kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.653212 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "919ebe38-6d23-4da9-a367-69340e2f8574" (UID: "919ebe38-6d23-4da9-a367-69340e2f8574"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.722986 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.951463 4740 generic.go:334] "Generic (PLEG): container finished" podID="919ebe38-6d23-4da9-a367-69340e2f8574" containerID="813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507" exitCode=0 Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.951657 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerDied","Data":"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507"} Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.952916 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerDied","Data":"f872b2600a32c5dc9ffd8d3280c5731606294c2c3a7a9e4f4aa51a1b5cfc5bf7"} Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.953020 4740 scope.go:117] "RemoveContainer" containerID="813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.951804 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.992692 4740 scope.go:117] "RemoveContainer" containerID="3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.000057 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.011725 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.025438 4740 scope.go:117] "RemoveContainer" containerID="9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.058210 4740 scope.go:117] "RemoveContainer" containerID="813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507" Feb 16 14:00:19 crc kubenswrapper[4740]: E0216 14:00:19.058638 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507\": container with ID starting with 813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507 not found: ID does not exist" containerID="813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.058685 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507"} err="failed to get container status \"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507\": rpc error: code = NotFound desc = could not find container \"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507\": container with ID starting with 813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507 not found: ID does not exist" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.058712 4740 scope.go:117] "RemoveContainer" containerID="3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053" Feb 16 14:00:19 crc kubenswrapper[4740]: E0216 14:00:19.059108 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053\": container with ID starting with 3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053 not found: ID does not exist" containerID="3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.059146 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053"} err="failed to get container status \"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053\": rpc error: code = NotFound desc = could not find container \"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053\": container with ID starting with 3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053 not found: ID does not exist" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.059173 4740 scope.go:117] "RemoveContainer" containerID="9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b" Feb 16 14:00:19 crc kubenswrapper[4740]: E0216 
14:00:19.059613 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b\": container with ID starting with 9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b not found: ID does not exist" containerID="9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.059641 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b"} err="failed to get container status \"9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b\": rpc error: code = NotFound desc = could not find container \"9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b\": container with ID starting with 9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b not found: ID does not exist" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.294364 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" path="/var/lib/kubelet/pods/919ebe38-6d23-4da9-a367-69340e2f8574/volumes" Feb 16 14:00:38 crc kubenswrapper[4740]: I0216 14:00:38.164739 4740 generic.go:334] "Generic (PLEG): container finished" podID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerID="6de4ab5b79f3cd9ee7fef7919182ed1cd692decc589a11ce5806dc9ce6541d23" exitCode=0 Feb 16 14:00:38 crc kubenswrapper[4740]: I0216 14:00:38.164893 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/must-gather-88jwg" event={"ID":"edeaee36-29fa-4f01-91d3-e79e65f07117","Type":"ContainerDied","Data":"6de4ab5b79f3cd9ee7fef7919182ed1cd692decc589a11ce5806dc9ce6541d23"} Feb 16 14:00:38 crc kubenswrapper[4740]: I0216 14:00:38.166196 4740 scope.go:117] "RemoveContainer" 
containerID="6de4ab5b79f3cd9ee7fef7919182ed1cd692decc589a11ce5806dc9ce6541d23" Feb 16 14:00:38 crc kubenswrapper[4740]: I0216 14:00:38.285097 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2mt6s_must-gather-88jwg_edeaee36-29fa-4f01-91d3-e79e65f07117/gather/0.log" Feb 16 14:00:45 crc kubenswrapper[4740]: I0216 14:00:45.577318 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:00:45 crc kubenswrapper[4740]: I0216 14:00:45.578350 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:00:48 crc kubenswrapper[4740]: I0216 14:00:48.964924 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2mt6s/must-gather-88jwg"] Feb 16 14:00:48 crc kubenswrapper[4740]: I0216 14:00:48.965919 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2mt6s/must-gather-88jwg" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="copy" containerID="cri-o://df5c35636e4b389439eb3bb1245f8b4f007c35530262c25d096fddd9822bcd74" gracePeriod=2 Feb 16 14:00:48 crc kubenswrapper[4740]: I0216 14:00:48.973034 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2mt6s/must-gather-88jwg"] Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.292675 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2mt6s_must-gather-88jwg_edeaee36-29fa-4f01-91d3-e79e65f07117/copy/0.log" Feb 16 14:00:49 crc 
kubenswrapper[4740]: I0216 14:00:49.293973 4740 generic.go:334] "Generic (PLEG): container finished" podID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerID="df5c35636e4b389439eb3bb1245f8b4f007c35530262c25d096fddd9822bcd74" exitCode=143 Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.416286 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2mt6s_must-gather-88jwg_edeaee36-29fa-4f01-91d3-e79e65f07117/copy/0.log" Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.416618 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.500411 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhgqt\" (UniqueName: \"kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt\") pod \"edeaee36-29fa-4f01-91d3-e79e65f07117\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.500531 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output\") pod \"edeaee36-29fa-4f01-91d3-e79e65f07117\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.509712 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt" (OuterVolumeSpecName: "kube-api-access-dhgqt") pod "edeaee36-29fa-4f01-91d3-e79e65f07117" (UID: "edeaee36-29fa-4f01-91d3-e79e65f07117"). InnerVolumeSpecName "kube-api-access-dhgqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.603741 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhgqt\" (UniqueName: \"kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.651979 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "edeaee36-29fa-4f01-91d3-e79e65f07117" (UID: "edeaee36-29fa-4f01-91d3-e79e65f07117"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.705174 4740 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:50 crc kubenswrapper[4740]: I0216 14:00:50.302650 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2mt6s_must-gather-88jwg_edeaee36-29fa-4f01-91d3-e79e65f07117/copy/0.log" Feb 16 14:00:50 crc kubenswrapper[4740]: I0216 14:00:50.303080 4740 scope.go:117] "RemoveContainer" containerID="df5c35636e4b389439eb3bb1245f8b4f007c35530262c25d096fddd9822bcd74" Feb 16 14:00:50 crc kubenswrapper[4740]: I0216 14:00:50.303115 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 14:00:50 crc kubenswrapper[4740]: I0216 14:00:50.325196 4740 scope.go:117] "RemoveContainer" containerID="6de4ab5b79f3cd9ee7fef7919182ed1cd692decc589a11ce5806dc9ce6541d23" Feb 16 14:00:51 crc kubenswrapper[4740]: I0216 14:00:51.293635 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" path="/var/lib/kubelet/pods/edeaee36-29fa-4f01-91d3-e79e65f07117/volumes" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.151124 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29520841-7tm7q"] Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152204 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="extract-utilities" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152223 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="extract-utilities" Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152239 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="copy" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152251 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="copy" Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152276 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="extract-content" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152285 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="extract-content" Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152300 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="gather" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152316 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="gather" Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152338 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152350 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152394 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="registry-server" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152406 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="registry-server" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152714 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="gather" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152736 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152761 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="registry-server" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152787 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="copy" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.153515 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.167246 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520841-7tm7q"] Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.206832 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.207014 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.207060 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdzhj\" (UniqueName: \"kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.207133 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.310623 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.310732 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.310932 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.310978 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdzhj\" (UniqueName: \"kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.319890 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.319943 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.320342 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.330281 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdzhj\" (UniqueName: \"kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.508495 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.958082 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520841-7tm7q"] Feb 16 14:01:01 crc kubenswrapper[4740]: I0216 14:01:01.423771 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-7tm7q" event={"ID":"3b1c60db-544d-480d-9234-ee41ad93e3aa","Type":"ContainerStarted","Data":"bd544e35de62220cc933766cedce5c0e2eacafd4b4ff0fe8a1e9e88ec1d9c2b4"} Feb 16 14:01:01 crc kubenswrapper[4740]: I0216 14:01:01.423989 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-7tm7q" event={"ID":"3b1c60db-544d-480d-9234-ee41ad93e3aa","Type":"ContainerStarted","Data":"cfb7f5048e1f526bca8580aae38a6068d9f9f89f5e184d8546cc26884d344bb8"} Feb 16 14:01:01 crc kubenswrapper[4740]: I0216 14:01:01.450117 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29520841-7tm7q" podStartSLOduration=1.450097477 podStartE2EDuration="1.450097477s" podCreationTimestamp="2026-02-16 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:01:01.439188686 +0000 UTC m=+4088.815537407" watchObservedRunningTime="2026-02-16 14:01:01.450097477 +0000 UTC m=+4088.826446208" Feb 16 14:01:03 crc kubenswrapper[4740]: I0216 14:01:03.449530 4740 generic.go:334] "Generic (PLEG): container finished" podID="3b1c60db-544d-480d-9234-ee41ad93e3aa" containerID="bd544e35de62220cc933766cedce5c0e2eacafd4b4ff0fe8a1e9e88ec1d9c2b4" exitCode=0 Feb 16 14:01:03 crc kubenswrapper[4740]: I0216 14:01:03.449635 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-7tm7q" 
event={"ID":"3b1c60db-544d-480d-9234-ee41ad93e3aa","Type":"ContainerDied","Data":"bd544e35de62220cc933766cedce5c0e2eacafd4b4ff0fe8a1e9e88ec1d9c2b4"} Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.850990 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.911885 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdzhj\" (UniqueName: \"kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj\") pod \"3b1c60db-544d-480d-9234-ee41ad93e3aa\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.912029 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle\") pod \"3b1c60db-544d-480d-9234-ee41ad93e3aa\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.912180 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys\") pod \"3b1c60db-544d-480d-9234-ee41ad93e3aa\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.912267 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data\") pod \"3b1c60db-544d-480d-9234-ee41ad93e3aa\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.925023 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "3b1c60db-544d-480d-9234-ee41ad93e3aa" (UID: "3b1c60db-544d-480d-9234-ee41ad93e3aa"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.925042 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj" (OuterVolumeSpecName: "kube-api-access-qdzhj") pod "3b1c60db-544d-480d-9234-ee41ad93e3aa" (UID: "3b1c60db-544d-480d-9234-ee41ad93e3aa"). InnerVolumeSpecName "kube-api-access-qdzhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.940788 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b1c60db-544d-480d-9234-ee41ad93e3aa" (UID: "3b1c60db-544d-480d-9234-ee41ad93e3aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.974552 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data" (OuterVolumeSpecName: "config-data") pod "3b1c60db-544d-480d-9234-ee41ad93e3aa" (UID: "3b1c60db-544d-480d-9234-ee41ad93e3aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.013963 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdzhj\" (UniqueName: \"kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.013993 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.014006 4740 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.014018 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.469175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-7tm7q" event={"ID":"3b1c60db-544d-480d-9234-ee41ad93e3aa","Type":"ContainerDied","Data":"cfb7f5048e1f526bca8580aae38a6068d9f9f89f5e184d8546cc26884d344bb8"} Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.469502 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfb7f5048e1f526bca8580aae38a6068d9f9f89f5e184d8546cc26884d344bb8" Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.469301 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520841-7tm7q" Feb 16 14:01:15 crc kubenswrapper[4740]: I0216 14:01:15.574877 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:01:15 crc kubenswrapper[4740]: I0216 14:01:15.575426 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:01:15 crc kubenswrapper[4740]: I0216 14:01:15.575466 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 14:01:15 crc kubenswrapper[4740]: I0216 14:01:15.576197 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2869e78774c1f00fe4f31559cf3edaa4b9383697775554e132318f0021981a4c"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:01:15 crc kubenswrapper[4740]: I0216 14:01:15.576268 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://2869e78774c1f00fe4f31559cf3edaa4b9383697775554e132318f0021981a4c" gracePeriod=600 Feb 16 14:01:16 crc kubenswrapper[4740]: I0216 14:01:16.592563 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="2869e78774c1f00fe4f31559cf3edaa4b9383697775554e132318f0021981a4c" exitCode=0 Feb 16 14:01:16 crc kubenswrapper[4740]: I0216 14:01:16.592673 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"2869e78774c1f00fe4f31559cf3edaa4b9383697775554e132318f0021981a4c"} Feb 16 14:01:16 crc kubenswrapper[4740]: I0216 14:01:16.593312 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b"} Feb 16 14:01:16 crc kubenswrapper[4740]: I0216 14:01:16.593357 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 14:02:06 crc kubenswrapper[4740]: I0216 14:02:06.979334 4740 scope.go:117] "RemoveContainer" containerID="5967744a58f90558a044d1988dda143828539df3212e40a650ea46f63383d4a1" Feb 16 14:03:15 crc kubenswrapper[4740]: I0216 14:03:15.574927 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:03:15 crc kubenswrapper[4740]: I0216 14:03:15.575636 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.221049 4740 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jtmnb"] Feb 16 14:03:33 crc kubenswrapper[4740]: E0216 14:03:33.229191 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1c60db-544d-480d-9234-ee41ad93e3aa" containerName="keystone-cron" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.229213 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1c60db-544d-480d-9234-ee41ad93e3aa" containerName="keystone-cron" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.229480 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1c60db-544d-480d-9234-ee41ad93e3aa" containerName="keystone-cron" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.231376 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtmnb" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.239921 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtmnb"] Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.240047 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-catalog-content\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.240131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwgqv\" (UniqueName: \"kubernetes.io/projected/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-kube-api-access-rwgqv\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.240547 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-utilities\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.341352 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-utilities\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.341447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-catalog-content\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.341481 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwgqv\" (UniqueName: \"kubernetes.io/projected/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-kube-api-access-rwgqv\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.342263 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-catalog-content\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.342389 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-utilities\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.364592 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwgqv\" (UniqueName: \"kubernetes.io/projected/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-kube-api-access-rwgqv\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb" Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.598121 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtmnb" Feb 16 14:03:34 crc kubenswrapper[4740]: I0216 14:03:34.104578 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtmnb"] Feb 16 14:03:34 crc kubenswrapper[4740]: I0216 14:03:34.936250 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8c43d70-33e9-42cf-8c2b-7e440dd9ab04" containerID="c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d" exitCode=0 Feb 16 14:03:34 crc kubenswrapper[4740]: I0216 14:03:34.936509 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerDied","Data":"c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d"} Feb 16 14:03:34 crc kubenswrapper[4740]: I0216 14:03:34.936539 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerStarted","Data":"8ae4cb36b51a0aec1859f8648c6befd7a501edc2ee851f140a2cd236b7a67421"} Feb 16 14:03:35 crc kubenswrapper[4740]: I0216 14:03:35.951288 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerStarted","Data":"ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3"} Feb 16 14:03:36 crc kubenswrapper[4740]: I0216 14:03:36.961481 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8c43d70-33e9-42cf-8c2b-7e440dd9ab04" containerID="ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3" exitCode=0 Feb 16 14:03:36 crc kubenswrapper[4740]: I0216 14:03:36.961555 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerDied","Data":"ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3"} Feb 16 14:03:36 crc kubenswrapper[4740]: I0216 14:03:36.961803 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerStarted","Data":"6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0"} Feb 16 14:03:36 crc kubenswrapper[4740]: I0216 14:03:36.983056 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jtmnb" podStartSLOduration=2.565024981 podStartE2EDuration="3.983039706s" podCreationTimestamp="2026-02-16 14:03:33 +0000 UTC" firstStartedPulling="2026-02-16 14:03:34.938332692 +0000 UTC m=+4242.314681423" lastFinishedPulling="2026-02-16 14:03:36.356347397 +0000 UTC m=+4243.732696148" observedRunningTime="2026-02-16 14:03:36.976549563 +0000 UTC m=+4244.352898294" watchObservedRunningTime="2026-02-16 14:03:36.983039706 +0000 UTC m=+4244.359388427"